
Pixelfed is an open-source alternative to Instagram. So if you want to post your pictures online and let strangers see them (for whatever reason), but you are scared of Facebook (or should I say Meta?) keeping a copy of your pictures forever, you can self-host your own Instagram thanks to Pixelfed!

It's easy to host it, but before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com pointing at the root domain domain.com, or an A entry for subdomain.domain.com pointing at the IP of your VPS.
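For example, assuming your VPS IP is 203.0.113.10 and you picked pixelfed.example.com (both are placeholders), the entry would look roughly like one of these:

pixelfed.example.com.    CNAME    example.com.
pixelfed.example.com.    A        203.0.113.10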

Initial setup

Before we get to the docker-compose file, we first need to build the image (pixelfed sadly does not provide a pre-built image), so we need to clone pixelfed's repo and build the image ourselves.

1. Clone the pixelfed repo:

git clone https://github.com/pixelfed/pixelfed.git
cd pixelfed
git checkout dev

2. Now it's time to build the pixelfed docker image:

docker build . -t pixelfed:latest -f contrib/docker/Dockerfile.apache

3. Now copy .env.example to .env.docker (this is our config file):

cp .env.example .env.docker

This is my .env.docker file:

## Crypto and we will generate this later
APP_KEY=

## General Settings
APP_NAME="Pixelfed Prod"
APP_ENV=production
APP_DEBUG=false
APP_URL=https://pixelfed.esmailelbob.xyz
APP_DOMAIN="pixelfed.esmailelbob.xyz"
ADMIN_DOMAIN="pixelfed.esmailelbob.xyz"
SESSION_DOMAIN="pixelfed.esmailelbob.xyz"

## !!!Make sure to enable it after you finish generating keys and migrating!!!
#ENABLE_CONFIG_CACHE=true

OPEN_REGISTRATION=false
ENFORCE_EMAIL_VERIFICATION=true
PF_MAX_USERS=1000
OAUTH_ENABLED=true

APP_TIMEZONE=UTC
APP_LOCALE=en

## Pixelfed Tweaks
LIMIT_ACCOUNT_SIZE=false
MAX_ACCOUNT_SIZE=1000000
MAX_PHOTO_SIZE=15000
MAX_AVATAR_SIZE=2000
MAX_CAPTION_LENGTH=500
MAX_BIO_LENGTH=125
MAX_NAME_LENGTH=30
MAX_ALBUM_LENGTH=4
IMAGE_QUALITY=80
PF_OPTIMIZE_IMAGES=true
PF_OPTIMIZE_VIDEOS=true
ADMIN_ENV_EDITOR=false
ACCOUNT_DELETION=true
ACCOUNT_DELETE_AFTER=false
MAX_LINKS_PER_POST=0

## Instance
#INSTANCE_DESCRIPTION=
INSTANCE_PUBLIC_HASHTAGS=true
#INSTANCE_CONTACT_EMAIL=
INSTANCE_PUBLIC_LOCAL_TIMELINE=true
#BANNED_USERNAMES=
STORIES_ENABLED=true
RESTRICTED_INSTANCE=false

## Mail
MAIL_DRIVER=log
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_FROM_ADDRESS="pixelfed@example.com"
MAIL_FROM_NAME="Pixelfed"
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null

## Databases (MySQL)
DB_CONNECTION=mysql
DB_DATABASE=pixelfed_prod
DB_HOST=db
DB_PASSWORD=pixelfed_db_pass
DB_PORT=3306
DB_USERNAME=pixelfed
# pass the same values to the db itself
MYSQL_DATABASE=pixelfed_prod
MYSQL_PASSWORD=pixelfed_db_pass
MYSQL_RANDOM_ROOT_PASSWORD=true
MYSQL_USER=pixelfed

## Databases (Postgres)
#DB_CONNECTION=pgsql
#DB_HOST=postgres
#DB_PORT=5432
#DB_DATABASE=pixelfed
#DB_USERNAME=postgres
#DB_PASSWORD=postgres

## Cache (Redis)
REDIS_CLIENT=phpredis
REDIS_SCHEME=tcp
REDIS_HOST=redis
REDIS_PASSWORD=redis_password
REDIS_PORT=6379
REDIS_DATABASE=0

## EXPERIMENTS 
EXP_LC=false
EXP_REC=false
EXP_LOOPS=false

## ActivityPub Federation
ACTIVITY_PUB=true
AP_REMOTE_FOLLOW=true
AP_SHAREDINBOX=true
AP_INBOX=true
AP_OUTBOX=true
ATOM_FEEDS=true
NODEINFO=true
WEBFINGER=true

## S3
FILESYSTEM_DRIVER=local
FILESYSTEM_CLOUD=s3
PF_ENABLE_CLOUD=false
#AWS_ACCESS_KEY_ID=
#AWS_SECRET_ACCESS_KEY=
#AWS_DEFAULT_REGION=
#AWS_BUCKET=
#AWS_URL=
#AWS_ENDPOINT=
#AWS_USE_PATH_STYLE_ENDPOINT=false

## Horizon
HORIZON_DARKMODE=true

## COSTAR - Confirm Object Sentiment Transform and Reduce
PF_COSTAR_ENABLED=false

# Media
MEDIA_EXIF_DATABASE=false

## Logging
LOG_CHANNEL=stderr

## Image
IMAGE_DRIVER=imagick

## Broadcasting
BROADCAST_DRIVER=log  # log driver for local development

## Cache
CACHE_DRIVER=redis

## Purify
RESTRICT_HTML_TYPES=true

## Queue
QUEUE_DRIVER=redis

## Session
SESSION_DRIVER=redis

## Trusted Proxy
TRUST_PROXIES="*"

## Passport
#PASSPORT_PRIVATE_KEY=
#PASSPORT_PUBLIC_KEY=

Edit the .env.docker file as needed, but make sure to change APP_DOMAIN, ADMIN_DOMAIN and SESSION_DOMAIN to your pixelfed domain (e.g. pixelfed.esmailelbob.xyz) and change APP_URL to https:// plus your pixelfed domain (e.g. https://pixelfed.esmailelbob.xyz).

pixelfed docker-compose file

After we finish with the .env.docker file, we need a docker-compose.yml file so we can start pixelfed; for me, I use this file:

---
version: '3'

# In order to set configuration, please use a .env file in
# your compose project directory (the same directory as your
# docker-compose.yml), and set database options, application
# name, key, and other settings there.
# A list of available settings is available in .env.example
#
# The services should scale properly across a swarm cluster
# if the volumes are properly shared between cluster members.

services:
## App and Worker
  app:
    # Comment to use dockerhub image
    build:
      context: .
      dockerfile: contrib/docker/Dockerfile.apache
    image: pixelfed:latest
    restart: unless-stopped
    env_file:
      - .env.docker
    volumes:
     # - app-storage:/var/www/storage
      - "./PIXELFED_DATA:/var/www/storage"
      - app-bootstrap:/var/www/bootstrap
      - "./.env.docker:/var/www/.env"
    networks:
      - external
      - internal
    ports:
      - "127.0.0.1:8282:80"
    depends_on:
      - db
      - redis

  worker:
    build:
      context: .
      dockerfile: contrib/docker/Dockerfile.apache
    image: pixelfed:latest
    restart: unless-stopped
    env_file:
      - .env.docker
    volumes:
     # - app-storage:/var/www/storage
      - "./PIXELFED_DATA:/var/www/storage"
      - app-bootstrap:/var/www/bootstrap
    networks:
      - external
      - internal
    command: gosu www-data php artisan horizon
    depends_on:
      - db
      - redis

## DB and Cache
  db:
    image: mysql:8.0
    restart: unless-stopped
    networks:
      - internal
    command: --default-authentication-plugin=mysql_native_password
    env_file:
      - .env.docker
    volumes:
      - "db-data:/var/lib/mysql"

  redis:
    image: redis:5-alpine
    restart: unless-stopped
    env_file:
      - .env.docker
    volumes:
      - "redis-data:/data"
    networks:
      - internal

volumes:
  db-data:
  redis-data:
  app-storage:
  app-bootstrap:

networks:
  internal:
    internal: true
  external:
    driver: bridge

Optional stuff to change:
* ./PIXELFED_DATA: This is where our photos will be saved
* 127.0.0.1:8282: Change this if you want to change the port of pixelfed

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container in the background, so docker does not print the logs of the running application, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

to check if there are any weird behaviors or errors.

Then we need to run a couple of commands to generate keys (so we can edit pixelfed from the GUI later) and to create the database! So first we will generate the keys; run:

docker-compose exec app php artisan key:generate
docker-compose exec app php artisan passport:keys
docker-compose exec app php artisan route:cache
docker-compose exec app php artisan cache:clear

And make sure we have the keys:

cat .env.docker | grep APP_KEY

Now it's time for the db (database):

docker-compose restart app
docker-compose exec app php artisan config:cache
docker-compose exec app php artisan migrate

NOTE: If the migration fails, take it down (docker-compose down), start it again (docker-compose up -d) and re-run the migration command (docker-compose exec app php artisan migrate).
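In other words, the recovery looks like this:

docker-compose down
docker-compose up -d
docker-compose exec app php artisan migrate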

Last but not least, it's time to create an account:

docker-compose exec app php artisan user:create

NOTE: to enable editing pixelfed's settings from the pixelfed website, add this to .env.docker:

ENABLE_CONFIG_CACHE=true
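After adding it, restart the app container and re-cache the config with the same commands we used above, for example:

docker-compose restart app
docker-compose exec app php artisan config:cache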

Nginx

Now that we made sure it's running well, we need to serve it over the internet (this is called a reverse proxy), so without much talk, here is our server block for pixelfed:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:8282/;
       }
}

* server_name: Change this to match the domain name of pixelfed
* include: this is our reverse proxy file
* proxy_pass: the IP and port of our running docker container

After this, pixelfed should be up and running! :) Just do not forget to run certbot --nginx to make it secure with https://
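For example, assuming the pixelfed.esmailelbob.xyz domain used above, that is roughly:

sudo certbot --nginx -d pixelfed.esmailelbob.xyz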

Update it

Of course, after some time the image will be outdated and you will need to update, and what I love about docker is that it's easy to update. To do it, just run:

git pull
docker-compose down
docker build . -t pixelfed:latest -f contrib/docker/Dockerfile.apache
docker-compose up -d

What this does is: 1) pull updates from GitHub, 2) stop the containers, 3) rebuild the pixelfed image and 4) start the containers back up!

Firewall

If you use a firewall (ufw for example), you really do not need to open any ports other than 443 and 80, as we use the nginx reverse proxy.
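With ufw, for example, that could look roughly like this (a sketch; adjust to your setup, and make sure SSH stays allowed before enabling the firewall):

sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable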

#howto #selfhost #docker

Gitea is like GitHub and GitLab: a git backend with a web GUI and some eye candy (just like GitHub). To be honest I have nothing more to say, as I think it's self-explanatory.

It's easy to host it, but before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

System Requirements

CPU: 2 cores
RAM: 1 GB

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com pointing at the root domain domain.com, or an A entry for subdomain.domain.com pointing at the IP of your VPS.

Initial setup

We need a docker-compose.yml file, but first we need to do some initial setup:

1. Create a new user account on our VPS (the host machine), name it git, and add a password for it:

sudo useradd -m git
sudo passwd git

2. Log in as the git user:

su git

And we now need to run these commands (save their output for later):

echo $(id -u)
echo $(id -g)

Now go back to your normal user (press ctrl+d).

gitea docker-compose file

Now we prepare our docker-compose.yml file:

version: "3"

networks:
    gitea:
        external: false

services:
  server:
    image: gitea/gitea
    container_name: gitea
    environment:
        - USER_UID=1001 # Enter the UID found from previous command output
        - USER_GID=1001 # Enter the GID found from previous command output
        - GITEA__database__DB_TYPE=mysql
        - GITEA__database__HOST=db:3306
        - GITEA__database__NAME=gitea
        - GITEA__database__USER=gitea
        - GITEA__database__PASSWD=giteaaa
        - GNUPGHOME=/data/git/.gnupg/
    restart: always
    networks:
        - gitea
    volumes:
        - ./data:/data
        - /etc/timezone:/etc/timezone:ro
        - /etc/localtime:/etc/localtime:ro
        - /home/git/.ssh/:/data/git/.ssh
    ports:
        - "127.0.0.1:3330:3000"
        - "127.0.0.1:2222:22"
    depends_on:
        - db

  db:
    image: mysql
    restart: always
    environment:
        - MYSQL_ROOT_PASSWORD=gitea
        - MYSQL_USER=gitea
        - MYSQL_PASSWORD=gitea
        - MYSQL_DATABASE=gitea
    networks:
        - gitea
    volumes:
        - ./mysql:/var/lib/mysql

Required stuff to change:
* USER_UID: Change this to the number we got from running echo $(id -u) as the git user
* USER_GID: Change this to the number we got from running echo $(id -g) as the git user
* MYSQL_PASSWORD and MYSQL_ROOT_PASSWORD: These are the passwords for our gitea database, so you know, change them?

Optional stuff to change:
* 127.0.0.1:3330:3000: Change this to serve gitea online (using nginx later)
* 127.0.0.1:2222:22: This is needed if we want to use ssh to clone our repos
* ./data:/data: This is where our config files and other needed data will be saved for gitea
* /home/git/.ssh/:/data/git/.ssh: This is for ssh; we will create an ssh key for the git user later
* GNUPGHOME=/data/git/.gnupg/: This is where git will look for gpg keys to sign commits. Since gitea 1.17 the folder changed, so on a fresh install you will not need this, but for someone like me I had to change where the folder is located

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container in the background, so docker does not print the logs of the running application, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

to check if there are any weird behaviors or errors.

Nginx

Now that we made sure it's running well, we need to serve it over the internet (this is called a reverse proxy), so without much talk, here is our server block for gitea:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:3330/;
       }
}

* server_name: Change this to match the domain name of gitea
* include: this is our reverse proxy file
* proxy_pass: the IP and port of our running docker container

Configure Gitea in web gui

After we run it, visit gitea in your browser; we really do not need to change anything. You can change the settings under “Administrator Account Settings”, “Server and Third-Party Service Settings” and “Email Settings” (you will find these under the “Optional Settings” section at the end of the page), but other than that I really recommend not changing the other settings, like the gitea base URL, because after some trial and error I noticed that when I change that later in the app.ini file (more about that later), gitea actually works and I can clone fine.

So after we are done, we need to edit app.ini (if you used my docker file, it should be located at data/gitea/conf/app.ini) and change DOMAIN and SSH_DOMAIN to our gitea domain name (in my case git.esmailelbob.xyz), and change ROOT_URL to “https://” plus our gitea domain name, so it would look like https://git.esmailelbob.xyz/ in my case. Now restart docker-compose (docker-compose down; docker-compose up -d) and you are good to go :)
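For reference, here is a minimal sketch of those keys; they live under the [server] section of app.ini, and git.esmailelbob.xyz is just my example domain:

[server]
DOMAIN = git.esmailelbob.xyz
SSH_DOMAIN = git.esmailelbob.xyz
ROOT_URL = https://git.esmailelbob.xyz/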

After this, gitea should be up and running! :) Just do not forget to run certbot --nginx to make it secure with https://

Update it

Of course, after some time the image will be outdated and you will need to update, and what I love about docker is that it's easy to update. To do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What this does is: 1) stop the containers, 2) pull the latest image (download the latest update) and 3) start the containers back up!

Firewall

If you use a firewall (ufw for example), you really do not need to open any ports other than 443 and 80, as we use the nginx reverse proxy.

Now we enable SSH for our gitea instance – This is Optional

If you want to enable cloning over SSH, it's easy to do for gitea in docker.

Host machine, VPS or server

We need to do all of these steps on our VPS side.

1. Make sure we already mapped the ssh port in our docker-compose file (if you followed along, you already did this):

ports:
     [...]
  - "127.0.0.1:2222:22"

2. Make sure we already added the UID and GID of the git user in our docker-compose file (if you followed along, you already did this):

environment:
  - USER_UID=1000 # this is for example, please change it
  - USER_GID=1000

3. We need to mount .ssh of the git user inside the docker-compose file; this ensures that both the git user on our host/VPS and the git user inside the container see the same SSH keys (if you followed along, you already did this):

volumes:
  - /home/git/.ssh/:/data/git/.ssh

4. Now generate an SSH key for gitea itself. This key pair will be used to authenticate the git user on the host to the container (no need to switch to the git user for this):

sudo -u git ssh-keygen -t rsa -b 4096 -C "Gitea Host Key"

5. Now copy the SSH key we created for the git user into authorized_keys (again, no need to switch to the git user on the host/VPS) so both the git user on the host and the git user inside the gitea container get the same copy of authorized ssh keys:

sudo -u git cat /home/git/.ssh/id_rsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys

sudo -u git chmod 600 /home/git/.ssh/authorized_keys

6. Now we can view /home/git/.ssh/authorized_keys (cat /home/git/.ssh/authorized_keys) and make sure it looks like:

# SSH pubkey from git user
ssh-rsa <Gitea Host Key>

7. Now we need to create an executable script:

cat <<"EOF" | sudo tee /usr/local/bin/gitea

#!/bin/sh

ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"

EOF

sudo chmod +x /usr/local/bin/gitea

This script forwards git commands from the host to the gitea container.

Now get back to our client or desktop

We need to do all of these steps on our desktop or own PC side.

So now on your PC simply create an ssh key:

ssh-keygen -t ecdsa

Then go to ~/.ssh/, grab the public key of your SSH key, log in to your gitea, and in settings ([gitea domain url]/user/settings) import your SSH key. Then create a test repo and try to clone it over SSH :)
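A test clone from your desktop would look roughly like this (username and test-repo are placeholders for your own account and repo, and git.esmailelbob.xyz is my example domain):

git clone git@git.esmailelbob.xyz:username/test-repo.git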

For more info please get back to gitea docs at https://docs.gitea.io/en-us/install-with-docker/#sshing-shim-with-authorized_keys

Now we enable GPG commit sign for our gitea instance – This is Optional

If you want to see that sweet little green lock beside your commits and let people know it was really you who made those changes, you need to enable GPG key signing inside gitea, and it's simple!

First we need to log in inside the container itself as the git user (do not mix it up with the git user on our host machine); to do so just type:

docker exec -it -u git gitea bash

Now do not panic, you are inside the gitea docker container as the git user! We simply need to generate a gpg key pair, which is as simple as:

gpg --full-generate-key

Answer the questions and make sure to type the name and email right, as we need to use them later! If you get a permissions error (gpg: WARNING: unsafe permissions on homedir '/home/path/to/user/.gnupg'), you might want to try:

chown -R $(whoami) data/git/.gnupg/
chmod 600 ~/.gnupg/* data/gitea/home/.gnupg/
chmod 700 ~/.gnupg data/gitea/home/.gnupg/

data/git/.gnupg/ is where the .gnupg folder is saved inside the docker container. If you used the same setup as mine you do not have to worry, but if you changed the volumes you might want to check where it's saved in your case!

After we are done, you can run:

gpg --list-secret-keys

to list the created keys and note their ID, name and email (we need them later).

Now log out of the container (press ctrl+d or type exit), edit the app.ini file (data/gitea/conf/app.ini) and paste (this is the setup I use):

[repository.signing]
DEFAULT_TRUST_MODEL = collaboratorcommitter
SIGNING_KEY         = default
SIGNING_NAME        = gitea
SIGNING_EMAIL       = gitea@esmailelbob.xyz
INITIAL_COMMIT      = always
CRUD_ACTIONS        = always
WIKI                = always
MERGES              = always

* SIGNING_KEY: Leave it as is (more on that later)
* SIGNING_NAME: Type the same name you used while creating the GPG key
* SIGNING_EMAIL: Type the same email you used while creating the GPG key

Now you need to restart docker (docker-compose down; docker-compose up -d) and go to your git domain.com/api/v1/signing-key.gpg (e.g. git.esmailelbob.xyz/api/v1/signing-key.gpg) and make sure you see a public gpg key displayed; if you see an empty page, try changing SIGNING_KEY in app.ini to the key's ID itself instead of default.
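You can also check it from a terminal, assuming the same example domain:

curl -s https://git.esmailelbob.xyz/api/v1/signing-key.gpg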

Now we need to log back in to the container as the git user (docker exec -it -u git gitea bash) and create a .gitconfig file at data/git/.gitconfig (again, if you followed my docker-compose setup it should be in the same place, but if you changed the volumes then you need to check where the git folder is saved). Your .gitconfig file should look like:

[user]
        email = git@esmailelbob.xyz
        name = gitea
        signingkey = 55B46434BB81637F
[commit]
        gpgsign = true
[gpg]
        program = /usr/bin/gpg
[core]
        quotepath = false
        commitGraph = true
[gc]
        writeCommitGraph = true
[receive]
        advertisePushOptions = true
        procReceiveRefs = refs/for

What you need to change:
* email: the email you typed while creating the gpg key; it should match the GPG key we created
* name: the name you typed while creating the gpg key; it should match the GPG key we created
* signingkey: the ID of the GPG key we created

Now leave the gitea container's bash and restart docker (docker-compose down; docker-compose up -d), and give it a try :). Make a test repo, try to commit stuff, and you should see the magic green lock.

NOTE 1: After you make the key, export its public key and add it to your gitea account in settings ([gitea domain url]/user/settings).

NOTE 2: If you want to, for example, only sign commits when the user has a gpg key in their account, or never sign commits at all, you can do that; please get back to the gitea docs to see the other options. For me, I wanted it to ALWAYS sign commits.

For more info please get back to gitea docs at https://docs.gitea.io/en-us/signing/

NOTE 3: This is not related to gitea, but it is related to gpg and git. On your PC, if you want to enable gpg signing too:
1. Generate a gpg key (gpg --full-generate-key) and grab its ID, name and email for later.
2. Edit the .gitconfig file (~/.gitconfig) on your own desktop (not the VPS/host machine) to make it look like:

[filter "lfs"]
        clean = git-lfs clean -- %f
        smudge = git-lfs smudge -- %f
        process = git-lfs filter-process
        required = true
[user]
        name = Esmail EL BoB
        email = esmail@esmailelbob.xyz
        signingkey = 4984C22F0C5CACDE73B05243F44C953A3C7A4E16
[http]
        sslBackend = openssl
[commit]
        gpgsign = true

Change name, email and signingkey to the same info you used while creating the gpg key.
3. List the GPG keys installed on your desktop (gpg --list-secret-keys), view the public key of the GPG key we just created (gpg --export --armor [key-id]) and add the GPG public key to your gitea account via settings ([gitea domain url]/user/settings).

Now you will be able to push commits and have them signed automatically, whether to gitea, GitHub or any git host really.
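If you want to double-check from the command line that a commit really got signed (assuming the signing public key is in your local gpg keyring), something like this shows the signature on the latest commit:

git log --show-signature -1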

Add more theme options in gitea – This is Optional

If you want to add more themes to gitea in docker, we need to know our CustomPath; if you followed along it should be data/gitea. To add themes we need to get a .css file and tell app.ini (gitea's config file) which themes to enable, so later we can select them from the gitea web GUI in settings. So first let's create the needed folders: go to data/gitea, create a new folder called public, cd into it and create a new folder called css, so the path looks like data/gitea/public/css:

cd data/gitea
mkdir public
cd public
mkdir css
cd css

Now it's time for the .css files. We can search online for gitea themes or visit https://gitea.com/gitea/awesome-gitea#user-content-themes to get some files to test.

We should already be in the css folder, so pick the .css file you want and download it using wget:

wget [theme url]

Now it's time to edit app.ini to tell it to enable the theme(s) we downloaded into the css folder! So open app.ini (it should be at data/gitea/conf/app.ini) and paste:

[ui]
DEFAULT_THEME = gitea
THEMES = gitea,arc-green,plex,aquamarine,dark,dracula,hotline,organizr,space-gray,hotpink,onedark,overseerr,nord,earl-grey,github,github-dark

* DEFAULT_THEME: the default theme for all users; it's okay to leave this as is really
* THEMES: list all of our downloaded themes here. To know a theme's name, look at the css file name: for theme-github.css the theme name is github

Now restart docker (docker-compose down; docker-compose up -d), go to gitea and edit your settings (click your profile picture at the upper right > click Settings > select Appearance from the top bar; the URL should look like [gitea domain]/user/settings/appearance), select the theme you want, click “Update Theme” and you should be good to go :) If nothing changed, it means you either downloaded the theme into the wrong folder or typed its name wrong in app.ini, so re-check it!

For more info please get back to gitea docs: https://docs.gitea.io/en-us/install-with-docker/#customization

#howto #selfhost #docker

Nextcloud is a self-hosted website that you can back up or sync your files to, just like Google Drive or Microsoft OneDrive, EXCEPT nextcloud has more tools or “apps”, so you can integrate your own jitsi-like server to video chat with people using your nextcloud instance, or back up your files with true end-to-end encryption (it's easy to enable in nextcloud).

It's easy to host it, but before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

System Requirements

RAM: 128 – 512 MB

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com pointing at the root domain domain.com, or an A entry for subdomain.domain.com pointing at the IP of your VPS.

nextcloud docker-compose file

We need a docker-compose.yml file so we can start nextcloud, for me I use this file:

version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb:10.5
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=NC
      - MYSQL_PASSWORD=NC
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    restart: always
    container_name: nextcloud
    ports:
      - 127.0.0.1:8585:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./NEXTCLOUD_DATA/:/var/www/html/data
      - ./config:/var/www/html/config
      - ./php.ini:/usr/local/etc/php/php.ini
    environment:
      - MYSQL_PASSWORD=NC
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db

Required stuff to change:
* MYSQL_PASSWORD: Change this to any other password
* MYSQL_ROOT_PASSWORD: Same as MYSQL_PASSWORD

Optional stuff to change:
* ./NEXTCLOUD_DATA/: This is where our uploaded files in nextcloud will be saved
* 127.0.0.1:8585:80: Change this if you want to change the port of nextcloud

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container in the background, so docker does not print the logs of the running application, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

to check if there are any weird behaviors or errors.

Nginx

Now that we made sure it's running well, we need to serve it over the internet (this is called a reverse proxy), so without much talk, here is our server block for nextcloud:

server {
        listen [::]:80;
        listen 80;
       
        server_name [nextcloud domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:8585/;
       }
}

* server_name: Change this to match the domain name of nextcloud
* include: this is our reverse proxy file
* proxy_pass: the IP and port of our running docker container

Configure Nextcloud

After we run it, visit nextcloud in your browser; really all we need to do is enter our account data, and if you are going to use this for personal or small usage you can select the SQLite 3 setup.

After this, nextcloud should be up and running! :) Just do not forget to run certbot --nginx to make it secure with https://

NOTE: to edit nextcloud's config, you will find config.php inside the config volume from the docker-compose file, i.e. ./config/config.php

Update it

Of course, after some time the image will be outdated and you will need to update, and what I love about docker is that it's easy to update. To do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What this does is: 1) stop the containers, 2) pull the latest image (download the latest update) and 3) start the containers back up!

Firewall

If you use a firewall (ufw for example), you really do not need to open any ports other than 443 and 80, as we use the nginx reverse proxy.

Set up Collabora Online to edit documents – Optional

So as a nextcloud user you now want to use it as much as you can, in every way! So now I will show you how to enable Collabora Online to edit documents right inside nextcloud.

Get collabora running

It's simple: open the nextcloud docker-compose file and add this block to it:

  collabora:
    image: collabora/code:latest
    container_name: collabora
    restart: unless-stopped
    cap_add:
     - MKNOD
    ports:
      - 127.0.0.1:9980:9980
    environment:
      - domain=cloud.esmailelbob.xyz
      - username=username
      - password=password
      - extra_params=--o:ssl.enable=true --o:ssl.termination=true
      - dictionaries=en_US ar_EG

* username & password: these act as a sort of auth or login when you use your collabora server, so make sure to change them
* domain: replace this with your nextcloud's domain name
* dictionaries: this tells collabora what languages we will write in or use it for, so in my case Arabic and English. This is optional, so you can delete it if you are not sure and it will load the dictionaries for all languages, or you can define more than one language to load their dictionaries
* 127.0.0.1:9980: the port collabora will run on

A simple docker-compose restart will download the collabora image and get you up and running.

Nginx proxy block

Now, this block is not a server block; it's a set of proxy locations, so simply put it inside an existing server block (say, to run both nextcloud and collabora under the same URL) or create a new subdomain and add it there (office.esmailelbob.xyz for example):

 # static files
 location ^~ /browser {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Host $http_host;
 }


 # WOPI discovery URL
 location ^~ /hosting/discovery {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Host $http_host;
 }


 # Capabilities
 location ^~ /hosting/capabilities {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Host $http_host;
 }


 # main websocket
 location ~ ^/cool/(.*)/ws$ {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Upgrade $http_upgrade;
   proxy_set_header Connection "Upgrade";
   proxy_set_header Host $http_host;
   proxy_read_timeout 36000s;
 }


 # download, presentation and image upload
 location ~ ^/(c|l)ool {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Host $http_host;
 }


 # Admin Console websocket
 location ^~ /cool/adminws {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Upgrade $http_upgrade;
   proxy_set_header Connection "Upgrade";
   proxy_set_header Host $http_host;
   proxy_read_timeout 36000s;
 }

Of course, do not forget to update the port number here IF you changed it inside the docker-compose file; if you just followed along, this step is not needed.

Nextcloud configuration

Now we go to nextcloud, install the new app Nextcloud Office and go to Settings. Under Administration you will find Office; click on it, type your collabora URL, click Save and it should say connected :)

NOTE: when you write your server URL (and you have the username and password option enabled), its format will look like username:password@URL of collabora, so type this in the server URL inside the Nextcloud Office settings.
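So, assuming the office.esmailelbob.xyz subdomain from the earlier example and the username/password set in the compose file, the URL would look roughly like:

https://username:password@office.esmailelbob.xyz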

Change Background jobs from AJAX to Cron – Optional

AJAX is fine, but it's not really reliable and I noticed problems with it, so let's change it to a cron job (which is far better). The command we need to run on a schedule is:

docker exec -u www-data nextcloud php cron.php

nextcloud here is the name or ID of the nextcloud container; if you use the same docker-compose as mine, then it's called nextcloud.

To add it as a cron job, first open your crontab for editing:

crontab -e

and paste:

*/5  *  *  *  * docker exec -u www-data nextcloud php cron.php

This runs the command every 5 minutes.

#howto #selfhost #docker

searX (or, as people call it, “search”) is a meta search engine, meaning that searX takes search results from websites like DuckDuckGo, Startpage and Google and displays them in searx, so none of these websites can log your IP or your search query.

It's easy to host it, but before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

System Requirements

RAM: 512 MB

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com pointing at the root domain domain.com, or an A entry for subdomain.domain.com pointing at the IP of your VPS.

searx docker-compose file

We need a docker-compose.yml file so we can start searx, for me I use this file:

version: '3.7'

services:

  searx:
    image: searx/searx:latest
    container_name: searx
    restart: unless-stopped
    ports:
      - '127.0.0.1:8787:8080'
    volumes:
      - './data/searx:/etc/searx'
    environment:
      - BASE_URL=https://searx.esmailelbob.xyz/
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - DAC_OVERRIDE

Required stuff to change:
* BASE_URL: Change this to your own domain name

Optional stuff to change:
* ./data/searx: This is where our config files and other needed data will be stored for searx. For me, I left it inside the searx root folder
* 127.0.0.1:8787:8080: Change this if you want to change the port of searx

NOTE: your config file is saved at data/searx/settings.yml, so open it up and change the settings you need to change: nano data/searx/settings.yml
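As a rough sketch (the exact keys can differ between searx versions, so treat these as assumptions and check your own settings.yml), two commonly changed values look like:

general:
  instance_name: "my searx"
server:
  secret_key: "change-me-to-a-long-random-string"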

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container in the background, so docker does not print the logs of the running application, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

to check if there are any weird behaviors or errors.

Nginx

Now that we made sure it's running well, we need to serve it over the internet (this is called a reverse proxy), so without much talk, here is our server block for searx:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:8787/;
       }
}

* server_name: Change this to match the domain name of searx
* include: this is our reverse proxy file
* proxy_pass: the IP and port of our running docker container

After this, searx should be up and running! :) Just do not forget to run certbot --nginx to make it secure with https://

Update it

Of course, after some time the image will be outdated and you will need to update, and what I love about docker is that it's easy to update. To do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What this does is: 1) stop the containers, 2) pull the latest image (download the latest update) and 3) start the containers back up!

Firewall

If you use a firewall (ufw for example), you really do not need to open any ports other than 443 and 80, as we use the nginx reverse proxy.

#howto #selfhost #docker

whoogle is a search-engine-like website. It lets you pull results from google.com without letting Google log your searches and your IP. It's a proxy for the Google search engine, but better, as it's light, fast and protects you from evil Google.

It's easy to host it, but before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com pointing at the root domain domain.com, or an A entry for subdomain.domain.com pointing at the IP of your VPS.

whoogle docker-compose file

We need a docker-compose.yml file so we can start whoogle, for me I use this file:

# cant use mem_limit in a 3.x docker-compose file in non swarm mode
# see https://github.com/docker/compose/issues/4513
version: "2.4"

services:
  whoogle-search:
    image: ${WHOOGLE_IMAGE:-benbusby/whoogle-search}
    container_name: whoogle-search
    restart: unless-stopped
    pids_limit: 50
    mem_limit: 256mb
    memswap_limit: 256mb
    # user debian-tor from tor package
    user: whoogle
    security_opt:
      - no-new-privileges
    cap_drop:
      - ALL
    tmpfs:
      - /config/:size=10M,uid=927,gid=927,mode=1700
      - /var/lib/tor/:size=10M,uid=927,gid=927,mode=1700
      - /run/tor/:size=1M,uid=927,gid=927,mode=1700
    environment: # Uncomment to configure environment variables
      # Basic auth configuration, uncomment to enable
      # - WHOOGLE_USER=<auth username>
      # - WHOOGLE_PASS=<auth password>
      # Proxy configuration, uncomment to enable
      # - WHOOGLE_PROXY_USER=<proxy username>
      # - WHOOGLE_PROXY_PASS=<proxy password>
      # - WHOOGLE_PROXY_TYPE=<proxy type (http|https|socks4|socks5)
      # - WHOOGLE_PROXY_LOC=<proxy host/ip>
      # Site alternative configurations, uncomment to enable
      # Note: If not set, the feature will still be available
      # with default values.
       - WHOOGLE_ALT_TW="nitter.esmailelbob.xyz"
       - WHOOGLE_ALT_YT="invidious.esmailelbob.xyz"
       - WHOOGLE_ALT_IG="bibliogram.esmailelbob.xyz/u"
       - WHOOGLE_ALT_RD="libreddit.esmailelbob.xyz"
      # - WHOOGLE_ALT_MD=farside.link/scribe
       - WHOOGLE_ALT_TL="lingva.esmailelbob.xyz"
       - WHOOGLE_ALT_IMG=imgin.voidnet.tech
       - WHOOGLE_ALT_WIKI=wikiless.org
       - WHOOGLE_CONFIG_DISABLE=0
       - WHOOGLE_CONFIG_COUNTRY=US
       - WHOOGLE_CONFIG_LANGUAGE=lang_en
       - WHOOGLE_CONFIG_SEARCH_LANGUAGE=lang_en
       - WHOOGLE_CONFIG_THEME=dark
       - WHOOGLE_CONFIG_ALTS=1
       - WHOOGLE_CONFIG_TOR=0
       - WHOOGLE_CONFIG_NEW_TAB=1
       - WHOOGLE_CONFIG_VIEW_IMAGE=1
       - WHOOGLE_AUTOCOMPLETE=1
       - EXPOSE_PORT=5000
       - WHOOGLE_CONFIG_URL=https://whoogle.esmailelbob.xyz/

    #env_file: # Alternatively, load variables from whoogle.env
      #  - ./whoogle.env
    ports:
      - 127.0.0.1:5005:5000

Optional stuff to change:
* environment: Under this we can set whatever settings we desire
* 127.0.0.1:5005:5000: Change this if you want to change the port of whoogle

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container in the background, so docker does not print the logs of the running application, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

to check if there are any weird behaviors or errors.

Nginx

Now that we made sure it's running well, we need to serve it over the internet (this is called a reverse proxy), so without much talk, here is our server block for whoogle:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:5005/;
       }
}

* server_name: Change this to match the domain name of whoogle
* include: this is our reverse proxy file
* proxy_pass: the IP and port of our running docker container

After this, whoogle should be up and running! :) Just do not forget to run certbot --nginx to make it secure with https://

Update it

Of course, after some time the image will be outdated and you will need to update, and what I love about docker is that it's easy to update. To do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What this does is: 1) stop the containers, 2) pull the latest image (download the latest update) and 3) start the containers back up!

Firewall

If you use a firewall (ufw for example), you really do not need to open any ports other than 443 and 80, as we use the nginx reverse proxy.

#howto #selfhost #docker

rimgo allows you to view imgur photos and videos in a safe environment, as it acts like a proxy for imgur. So you can view photos without letting imgur see your IP ;)

It's easy to host it, but before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com pointing at the root domain domain.com, or an A entry for subdomain.domain.com pointing at the IP of your VPS.

rimgo docker-compose file

We need a docker-compose.yml file so we can start rimgo, for me I use this file:

version: '3'

services:
  rimgo:
    image: quay.io/pussthecatorg/rimgo
    ports:
      - 127.0.0.1:4040:3000
    volumes:
      - ./config.yml:/app/config.yml
    restart: unless-stopped

Optional stuff to change:
* 127.0.0.1:4040:3000: Change this if you want to change the port number

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container in the background, so docker does not print the logs of the running application, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

to check if there are any weird behaviors or errors.

Nginx

Now that we made sure it's running well, we need to serve it over the internet (this is called a reverse proxy), so without much talk, here is our server block for rimgo:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:4040/;
       }
}

* server_name: Change this to match the domain name of rimgo
* include: this is our reverse proxy file
* proxy_pass: the IP and port of our running docker container

After this, rimgo should be up and running! :) Just do not forget to run certbot --nginx to make it secure with https://

Update it

Of course, after some time the image will be outdated and you will need to update, and what I love about docker is that it's easy to update. To do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What this does is: 1) stop the containers, 2) pull the latest image (download the latest update) and 3) start the containers back up!

Firewall

If you use a firewall (ufw for example), you really do not need to open any ports other than 443 and 80, as we use the nginx reverse proxy.

#howto #selfhost #docker

lingva is a translation website. It helps you translate words you do not understand, and its translations are actually good. Want to know why? Because it pulls from Google itself! So it's more like a proxy for Google Translate, but better, because it stops Google from spying on you, and you can make an onion link and use it over Tor Browser :) (For more info visit: https://blog.esmailelbob.xyz/how-to-mirror-your-website-over-tor-network)

It's easy to host it, but before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com pointing at the root domain domain.com, or an A entry for subdomain.domain.com pointing at the IP of your VPS.

lingva docker-compose file

We need a docker-compose.yml file so we can start lingva, for me I use this file:

version: '3'

services:

  lingva:
    container_name: lingva
    image: thedaviddelta/lingva-translate:latest
    restart: unless-stopped
    environment:
      - site_domain=lingva.esmailelbob.xyz
      - dark_theme=true
    ports:
      - 127.0.0.1:3004:3000

Required stuff to change:
* site_domain: This is our domain name for lingva

Optional stuff to change:
* 127.0.0.1:3004:3000: Change this if you want to change the port of lingva
* dark_theme: Set this to true if you want your website to be in dark theme by default

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container in the background, so docker does not print the logs of the running application, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

to check if there are any weird behaviors or errors.

Nginx

Now that we made sure it's running well, we need to serve it over the internet (this is called a reverse proxy), so without much talk, here is our server block for lingva:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:3004/;
       }
}

* server_name: Change this to match the domain name of lingva
* include: this is our reverse proxy file
* proxy_pass: the IP and port of our running docker container

After this, lingva should be up and running! :) Just do not forget to run certbot --nginx to make it secure with https://

Update it

Of course, after some time the image will be outdated and you will need to update, and what I love about docker is that it's easy to update. To do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What this does is: 1) stop the containers, 2) pull the latest image (download the latest update) and 3) start the containers back up!

Firewall

If you use a firewall (ufw for example), you really do not need to open any ports other than 443 and 80, as we use the nginx reverse proxy.

#howto #selfhost #docker

Invidious is a free and open-source project made to view YouTube videos without ads and without YouTube/Google tracking what you watch. It's more like a proxy for YouTube (but better).

It's easy to host it, so before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com pointing at the root domain domain.com, or an A entry for subdomain.domain.com pointing at the IP of your VPS.

Invidious docker-compose file

We need a docker-compose.yml file so we can start invidious, for me I use this file:

version: "2.4"
services:
  postgres:
    image: postgres:10
    restart: always
    networks:
      - invidious
    volumes:
      - postgresdata:/var/lib/postgresql/data
      - ./config/sql:/config/sql
      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    environment:
      POSTGRES_DB: invidious
      POSTGRES_USER: kemal
      POSTGRES_PASSWORD: kemal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
  invidious:
    image: quay.io/invidious/invidious:latest
    restart: always
    networks:
      - invidious
    mem_limit: 1024M
    cpus: 0.5
    ports:
      - "127.0.0.1:3030:3000"
    environment:
      INVIDIOUS_CONFIG: |
        channel_threads: 1
        check_tables: true
        feed_threads: 1
        db:
          dbname: invidious
          user: kemal
          password: kemal
          host: postgres
          port: 5432
        full_refresh: false
        https_only: false
        domain: invidious.esmailelbob.xyz
        registration_enabled: false
        banner: |
           <p>Help me keep running this service by donating: <a href="https://donate.esmailelbob.xyz"><i>https://donate.esmailelbob.xyz</i></a></p>
        admins: ["esmailelbob"]
        captions: ["English", "English (auto-generated)", "Arabic"]
        dark_mode: true
        annotations: true
        comments: ["youtube", "reddit"]
        player_style: youtube
        quality: dash
        quality_dash: worst
        local: false
        statistics_enabled: true
        external_port: 4040
      # external_port:
    healthcheck:
      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
      interval: 30s
      timeout: 5s
      retries: 2
    depends_on:
      - postgres
  autoheal:
    restart: always
    image: willfarrell/autoheal
    environment:
      - AUTOHEAL_CONTAINER_LABEL=all
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  postgresdata:

networks:
  invidious:

Optional stuff to change:
* 127.0.0.1:3030:3000: Change this if you want to change the port of invidious
* INVIDIOUS_CONFIG: Here we can add settings, for example disable login or disable people from making new accounts and so on. If you want to learn more you can get back to: https://github.com/iv-org/invidious/blob/master/config/config.example.yml

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container in the background, so docker does not print the logs of the running application, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

to check if there are any weird behaviors or errors.

Nginx

Now that we made sure it's running well, we need to serve it over the internet (this is called a reverse proxy), so without much talk, here is our server block for invidious:

limit_req_zone $binary_remote_addr zone=one:50m rate=50r/m;
limit_conn_zone $binary_remote_addr zone=addr:50m;

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:3030/;
       }
}

* server_name: Change this to match the domain name of invidious
* include: this is our reverse proxy file
* proxy_pass: the IP and port of our running docker container

After this, invidious should be up and running! :) Just do not forget to run certbot --nginx to make it secure with https://

Update it

Of course, after some time the image will be outdated and you will need to update, and what I love about docker is that it's easy to update. To do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What this does is: 1) stop the containers, 2) pull the latest image (download the latest update) and 3) start the containers back up!

Firewall

If you use a firewall (ufw for example), you really do not need to open any ports other than 443 and 80, as we use the nginx reverse proxy.

#howto #selfhost #docker

Scribe is a proxy for the medium.com website. If you are like me and live in a country that blocks medium, for example, or just do not want to let medium take and record your IP, you can self-host scribe! (It's like invidious for YouTube.)

So before we start I will assume some things:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com pointing at the root domain domain.com, or an A entry for subdomain.domain.com pointing at the IP of your VPS.

Prepare

This time we do not have a docker-compose.yml file! Instead we have a Dockerfile, and it runs a little differently from docker-compose. To install it we need to git clone the repo and just run our little docker command. It took me some trial and error to settle on the perfect command line options, so yup! What we need to do is:

  • Clone the repo:

    git clone https://git.sr.ht/~edwardloveall/scribe
    cd scribe
    
  • Build docker image:

    docker build -t scribe:latest -f ./Dockerfile .
    
  • Now, we run:

    docker run -d -it --rm -p 127.0.0.1:6666:8080 -e LUCKY_ENV=production -e APP_DOMAIN=scribe.esmailelbob.xyz  -e SCRIBE_HOST=0.0.0.0 -e DATABASE_URL=postgres://does@not/matter -e SECRET_KEY_BASE="sqnmkgmmmwrgubwvoohscvmafxrauufd" -e PORT=8080 scribe:latest
    

    So this is the same command I use to run my own scribe instance!

Required stuff to change/keep:
* LUCKY_ENV=production: tells scribe we will use it publicly, so it will display our domain name instead of localhost
* APP_DOMAIN=scribe.esmailelbob.xyz: change scribe.esmailelbob.xyz to your own domain name
* SCRIBE_HOST=0.0.0.0: after trial and error I found I must keep this as is, otherwise it will fail
* PORT=8080: same; after trial and error I found that when you change this, the image breaks
* DATABASE_URL=postgres://does@not/matter: same; scribe does not use a database, but we need to provide a (fake) one to let the container start, so do not change this either
* SECRET_KEY_BASE="sqnmkgmmmwrgubwvoohscvmafxrauufd": this one you DO need to change. It's a 32-character random string, a password-like code; for me I came up with these random characters!
* scribe:latest: the docker image we built. If you used a different tag while building your docker image, replace latest with the tag you used
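If you do not want to invent that 32-character string by hand, you can generate one; for example, this prints 32 hex characters:

openssl rand -hex 16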

Optional stuff to change:
* 127.0.0.1:6666:8080: the IP and port docker will publish scribe on. Use localhost to prevent people from accessing the website directly via the VPS's IP; :6666 is our port number

Nginx and server block

Now that we made sure our image is running without any errors, we can move on and create our server block for nginx, so it will look like:

server{
        listen [::]:80;
        listen 80;
        server_name [domain name] ;
        location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:6666/;
        }
}

* server_name: Change this to match the domain name of scribe
* include: this is our reverse proxy file
* proxy_pass: the IP and port of our running docker container

After this, scribe should be up and running! :) Just do not forget to run certbot --nginx to make it secure with https://

Update it

Of course, after some time the image will be outdated and you will need to update, and what I love about docker is that it's easy to update. To do it, run:

git pull
docker ps | grep scribe
docker stop [add id here]
docker build -t scribe:latest -f ./Dockerfile .
docker run -d -it --rm -p 127.0.0.1:6666:8080 -e LUCKY_ENV=production -e APP_DOMAIN=scribe.esmailelbob.xyz  -e SCRIBE_HOST=0.0.0.0 -e DATABASE_URL=postgres://does@not/matter -e SECRET_KEY_BASE="sqnmkgmmmwrgubwvoohscvmafxrauufd" -e PORT=8080 scribe:latest

What this does is: 1) pull updates from the git repo, 2) find the scribe container's ID and stop it, 3) rebuild the scribe image and 4) start the container back up!

Firewall

If you use a firewall (ufw for example), you really do not need to open any ports other than 443 and 80, as we use the nginx reverse proxy.

#howto #selfhost #docker

Tor network, or the onion router, is a hidden network whose URLs end in .onion instead of .com, for example. It's known for its strong privacy and security. People use Tor Browser to access the internet without filters or blocks, and today you are going to host your website on both the clear net and the tor network (this is called mirroring).

So before we start I will assume some things:
* You are running debian 11
* You already have access to that VPS (SSH, etc)
* Your Linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You know how to type on a keyboard
* You have sudo access or a root account
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp

Setting up tor

This step changes from distro to distro and OS to OS, but because I use debian I will explain it for that!

So to install it just run:

sudo apt install -y apt-transport-https gpg

sudo echo "deb https://deb.torproject.org/torproject.org bullseye main
deb-src https://deb.torproject.org/torproject.org bullseye main" > /etc/apt/sources.list.d/tor.list

sudo curl -s https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --import gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -

sudo apt update

sudo apt install tor deb.torproject.org-keyring

This adds the tor repos to our sources list, downloads the PGP key and installs tor.

Config tor

We need to edit /etc/tor/torrc:

sudo nano /etc/tor/torrc

And uncomment:

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:80

Now enable and start the tor service; we simply need to use systemd:

sudo systemctl enable --now tor
sudo systemctl status tor
sudo systemctl restart tor

And to get our onion domain name we need to copy the address inside /var/lib/tor/hidden_service/hostname:

sudo cat /var/lib/tor/hidden_service/hostname

Nginx

Now that we have a running tor service, we need to tell nginx to serve our website over tor. It's the same as our normal server block, except that instead of listening on 80 it listens on 127.0.0.1:80, and our server_name will be our tor domain name instead of the domain we bought (.xyz for example).

This config is for a static website (as an example):

        server {
            listen 127.0.0.1:80 ;
            root /var/www/mainSite ;
            index index.html ;
            server_name your-onion-address.onion ;
        }

NOTE: If you want the website to automatically redirect to its tor version whenever someone uses Tor Browser to visit your website, you can add this line: add_header Onion-Location http://your-onion-address.onion$request_uri; to your clear net server block, so it would be something like:

        server {
            listen 80 ;
            root /var/www/mainSite ;
            index index.html ;
            server_name your-clearnet-domain.com ;

            add_header Onion-Location http://your-onion-address.onion$request_uri;
        }

#howto #selfhost