Blog

Thoughts, ideas and code

To start online you need a domain name (esmailelbob.xyz, for example), and there are many, many websites that give you the ability to buy domain names. There are famous ones like godaddy and namecheap, and there are others called offshore domain resellers (more on that later).

Which company you buy your domain from depends on your goal. But most people say godaddy is bad, so avoid them by all means; also, the recent epik hack showed how bad they were about security, so avoid them too! You can give namecheap a try.

But look out: most domain registrars force you to enter personal data such as your address or real full name, and whois lookup websites can show this info easily. You can lie about the data you enter, but if the company finds out, they will close your account and take your domain name. Here is where something called an offshore domain registrar comes into play: they buy the domain for you, but under their name, so if someone does a whois lookup they will find the data of the company you bought from. In my case I use Njal.la, and so far my experience with them is okay. Maybe the only downside is that their prices are a little high (for me around 200 EGP per year), but you know what, it's a small price to pay for privacy.

Conclusion

If you want to hide your real info online, you might give offshore domain registrars a try; and if you are okay with showing your data (or with paying extra for whois guard from the company you buy the domain from), then you might give namecheap or something similar a try!

#howto #selfhost


Like my work? Support me: https://donate.esmailelbob.xyz

Mailcow is a docker-based suite that has all the tools you need to be your own email server, just like gmail but self-hosted, and it's actually easy to manage ;)

It's easy to host, but before I start I will assume that:

* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get
* Your VPS has the SMTP port unlocked

Before initial setup

We need to make sure our VPS has the SMTP port open, so do a simple test using telnet:

sudo apt install telnet

to install telnet first, then test whether the outgoing smtp port is enabled:

telnet smtp.gmail.com 25

This pings google's gmail server, and if we get a timeout error it means our VPS does not have port 25 open, so you can ask your provider to open it. It depends on the VPS: for example, vultr opens it after you verify yourself, and hetzner opens it after 1 month of using the service.

And we need to set up a "reverse dns" record on our VPS so that when websites like gmail check whether our domain is spam or not, we pass the test. This changes from one VPS provider to another, so please get back to your provider.
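As a side note, the reverse DNS (PTR) record name is just your VPS IP with the octets reversed under .in-addr.arpa. A small sketch (the IP here is a placeholder, not a real one):

```shell
# Build the PTR record name for an IPv4 address (example IP, replace with yours).
ip="203.0.113.10"
ptr="$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')"
echo "$ptr"   # 10.113.0.203.in-addr.arpa
```

Once your provider sets the record, `dig +short -x 203.0.113.10` should print your mail domain.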

System Requirements

RAM: 800 MB – 1 GB
CPU: 1 GHz
Disk: 5 GB
System type: x86 or x86_64

Changes in DNS (domain side)

We need to add 2 dns entries:

1. An A dns entry named mail.[domain name] whose target is our VPS IP
2. An MX entry on the root domain whose target is the mailcow domain (ex: mail.[domain name])

There are also optional dns entries: you can find them when you visit your mailcow domain, log in as the admin, click Configuration, select Mail Setup and from there click the "DNS" button (the URL should look like [mailcow domain]/mailbox). There you will see a list of DNS records, so add them one by one at your domain.
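In zone-file notation, the two required entries would look something like this (the name and IP are placeholders):

```
mail    IN  A   203.0.113.10            ; A entry -> your VPS IP
@       IN  MX  10  mail.example.com.   ; MX on the root -> mailcow domain
```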

Initial setup

We need to clone mailcow git repo and generate config so:

git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized

And generate the config using your FQDN (fully qualified domain name, ex: mail.esmailelbob.xyz):

./generate_config.sh

Change config if you need to:

nano mailcow.conf

Make sure to change the ports if you have other websites/applications running on the same ports (bind HTTPS to 127.0.0.1 on port 8443 and HTTP to 127.0.0.1 on port 8080, for example).
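For reference, the relevant lines in the generated mailcow.conf look like this (the ports shown are the example ones from above):

```
HTTP_PORT=8080
HTTP_BIND=127.0.0.1
HTTPS_PORT=8443
HTTPS_BIND=127.0.0.1
```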

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

the -d option detaches the container, so docker does not print the running application's logs, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

To check whether there are any weird behaviors or errors.

Nginx

Now that we have made sure it's running well, we need to serve it over the internet (called a reverse proxy), so without much talk, here is our server block for mailcow:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

        location / {
                proxy_pass http://127.0.0.1:8080/;

                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                client_max_body_size 0;

        }


}

server_name: Change this to match your mailcow domain name
proxy_pass: The IP and port of our running docker image

After this, mailcow should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://

Update it

Of course, after some time the image will become outdated and you will need to update it. What I love about docker is that it's easy to update; really, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does is: 1) stop the containers, 2) pull the latest update (download it) and 3) re-run the containers again!

Firewall

If you use a firewall (ufw, for example) you need to open these TCP ports: 25, 80, 110, 143, 443, 465, 587, 993, 995 and 4190.
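A small sketch for ufw users: print the rules first, then apply them once they look right (ufw itself needs sudo; the port list is the one above):

```shell
# Print one "ufw allow" rule per mailcow port; pipe the output to sh to apply.
for port in 25 80 110 143 443 465 587 993 995 4190; do
  echo "sudo ufw allow ${port}/tcp"
done
```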

#howto #selfhost #docker


Like my work? Support me: https://donate.esmailelbob.xyz

Mastodon is free and open source social media software, and it's an alternative to twitter.

It's easy to host, but before I start I will assume that:

* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any dns entries, unless you want to create a subdomain for this container. In that case, go to your domain's dns panel and add either a CNAME entry that looks like subdomain.domain.com with the root domain domain.com as its target, or an A entry for subdomain.domain.com with your VPS IP as its target.
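In zone-file notation the two options would look something like this (names and IP are placeholders; pick one):

```
mastodon  IN  CNAME  example.com.    ; alias the subdomain to the root domain
; or:
mastodon  IN  A      203.0.113.10    ; point the subdomain straight at the VPS IP
```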

mastodon docker-compose file

We need a docker-compose.yml file so we can start mastodon (the .env.production file it references is prepared in the "Prepare Mastodon" step below); for me, I use this file:

version: '3'
services:

  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
    volumes:
      - ./postgres14:/var/lib/postgresql/data
    environment:
      - "POSTGRES_HOST_AUTH_METHOD=trust"

  redis:
    restart: always
    image: redis:6-alpine
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
    volumes:
      - ./redis:/data

#  es:
#    restart: always
#    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
#    environment:
#      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
#      - "cluster.name=es-mastodon"
#      - "discovery.type=single-node"
#      - "bootstrap.memory_lock=true"
#    networks:
#      - internal_network
#    healthcheck:
#      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
#    volumes:
#      - ./elasticsearch:/usr/share/elasticsearch/data
#    ulimits:
#      memlock:
#        soft: -1
#        hard: -1

  web:
#    build: .
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off 127.0.0.1:3000/health || exit 1"]
    ports:
      - "127.0.0.1:3000:3000"
    depends_on:
      - db
      - redis
#      - es
    volumes:
      - ./MASTODON_DATA:/mastodon/public/system
      - ./MASTODON_DATA:/mastodon/public/assets
      - ./Mastomoji.tar.gz:/opt/mastodon/Mastomoji.tar.gz
  streaming:
#    build: .
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: node ./streaming
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off 127.0.0.1:4000/api/v1/streaming/health || exit 1"]
    ports:
      - "127.0.0.1:4000:4000"
    depends_on:
      - db
      - redis

  sidekiq:
#    build: .
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    networks:
      - external_network
      - internal_network
    volumes:
      - ./MASTODON_DATA:/mastodon/public/system
## Uncomment to enable federation with tor instances along with adding the following ENV variables
## http_proxy=http://privoxy:8118
## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true
#  tor:
#    image: sirboops/tor
#    networks:
#      - external_network
#      - internal_network
#
#  privoxy:
#    image: sirboops/privoxy
#    volumes:
#      - ./priv-config:/opt/config
#    networks:
#      - external_network
#      - internal_network

networks:
  external_network:
  internal_network:
    internal: true

Optional stuff to change:

./MASTODON_DATA: This is where our posts and photos will be saved

NOTE: Do not change mastodon's ports. I tried to change them and it did not work, so if you have another website running on the same port, change the port for the other website, not mastodon.

Prepare Mastodon

Now we need to prepare .env.production so we can edit it and configure mastodon:

cp .env.production.sample .env.production

Now let's pull the images from dockerhub:

docker-compose pull

And correct the permissions of mastodon's folders:

sudo chown -R 991:991 public/system

Spin it up!

Now that we are done with the docker-compose file and have prepared it, we need to set up mastodon: create its database, generate secret keys and other stuff:

docker-compose run --rm web bundle exec rake mastodon:setup

Answer the questions, and after you are done it will show you mastodon's configuration in the terminal, so copy it and paste it into the .env.production file.

NOTE: You might want to keep the database host, port, username and password (and the redis settings too) at their defaults while you interact with the wizard.
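For reference, the pasted wizard output in .env.production will contain lines like these (the values here are illustrative, not real; the db/redis hosts match the service names in the docker-compose file above):

```
LOCAL_DOMAIN=social.example.com
DB_HOST=db
DB_PORT=5432
DB_NAME=postgres
DB_USER=postgres
DB_PASS=
REDIS_HOST=redis
REDIS_PORT=6379
SECRET_KEY_BASE=...   # generated by the wizard, paste yours
OTP_SECRET=...        # generated by the wizard, paste yours
```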

Now it's time to run mastodon! so:

docker-compose up -d

the -d option detaches the container, so docker does not print the running application's logs, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

To check whether there are any weird behaviors or errors.

Nginx

Now to tricky part number 2! After we make sure it's running well, we need to serve it over the internet (called a reverse proxy). First make a dummy nginx website with the same domain name we will use for mastodon, then generate an https certificate using certbot. After it's all set up, you may paste this into your mastodon nginx server block:

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}
server {
  server_name [mastodon domain name] ;

#  ssl_protocols TLSv1.2;
#  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
#  ssl_prefer_server_ciphers on;

  #ssl_session_cache shared:SSL:10m;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 80m;

  root /home/mastodon/live/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
  }
  
  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://127.0.0.1:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;


    add_header Onion-Location http://social.lqs5fjmajyp7rvp4qvyubwofzi6d4imua7vs237rkc4m5qogitqwrgyd.onion$request_uri;
  #root /home/mastodon/live/public;
  # Useful for Let's Encrypt
  #location /.well-known/acme-challenge/ { allow all; }
  #location / { return 301 https://$host$request_uri; }



    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/[mastodon domain name]/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/[mastodon domain name]/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
        listen 127.0.0.1:80 ;
        server_name social.lqs5fjmajyp7rvp4qvyubwofzi6d4imua7vs237rkc4m5qogitqwrgyd.onion ;
#  ssl_protocols TLSv1.2;
#  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
#  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 80m;

  root /home/mastodon/live/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
  }
  
  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://127.0.0.1:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;
}


server {
    if ($host = [mastodon domain name]) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


  server_name [mastodon domain name] ;

    listen [::]:80;
    listen 80;
    return 404; # managed by Certbot


}

[mastodon domain name]: Replace this with your mastodon's domain name

Update it

Of course, after some time the image will become outdated and you will need to update it. What I love about docker is that it's easy to update; really, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does is: 1) stop the containers, 2) pull the latest update (download it) and 3) re-run the containers again!

And if you get a notification about migrating your database, simply run:

docker-compose run --rm web bundle exec rake db:migrate

Firewall

If you use a firewall (ufw, for example) you really do not need any ports other than 443 and 80, as we use an nginx reverse proxy.

To add an onion link so your instance is offered on both tor and the clear net, simply open the .env.production file and add this line:

ALTERNATE_DOMAINS=[YOUR ONION DOMAIN HERE OR REALLY ANY OTHER DOMAIN]

Save, restart the containers (docker-compose down && docker-compose up -d) and it should work.

#howto #selfhost #docker


Like my work? Support me: https://donate.esmailelbob.xyz

Unlike Lingva, which is a proxy for google translate, Libre Translate uses Argos Translate, which is an actual translation engine. It has its own engine, its own AI and an API for translations, so whether you are an application owner who wants a free API or a normal user who wants to escape the google translate bubble, you came to the right application :)

It's easy to host, but before I start I will assume that:

* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

System Requirements

RAM: 4GB

Changes in DNS (domain side)

You really do not need to add any dns entries, unless you want to create a subdomain for this container. In that case, go to your domain's dns panel and add either a CNAME entry that looks like subdomain.domain.com with the root domain domain.com as its target, or an A entry for subdomain.domain.com with your VPS IP as its target.

Libre translate docker-compose file

We need a docker-compose.yml file so we can start Libre translate; for me, I use this file:

version: "3"

services:
  libretranslate:
    container_name: libretranslate
    #build: .
    image: libretranslate/libretranslate
    restart: unless-stopped
    ports:
      - 127.0.0.1:5550:5000
    ## Adjust the args below if necessary
    command: --frontend-language-source en --frontend-language-target ar

Optional stuff to change:

127.0.0.1:5550: The IP and port number for Libre translate (so we can use it later for nginx to reverse proxy it)
command: Configuration for Libre translate; to learn more args or commands to add, please visit: https://github.com/LibreTranslate/LibreTranslate#arguments

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

the -d option detaches the container, so docker does not print the running application's logs, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

To check whether there are any weird behaviors or errors.

Nginx

Now that we have made sure it's running well, we need to serve it over the internet (called a reverse proxy), so without much talk, here is our server block for Libre translate:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:5550/;
       }
}

server_name: Change this to match your Libre translate domain name
include: Our reverse proxy file
proxy_pass: The IP and port of our running docker image
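If you followed the reverse proxy post, your /etc/nginx/reverse-proxy.conf will look something like this (a typical sketch; your actual file may differ):

```
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0;
```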

After this, Libre translate should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://
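Once it's running, you can sanity-check the translation API. A quick sketch (the port matches the docker-compose mapping above, and the JSON fields are the ones LibreTranslate's /translate endpoint expects):

```shell
# Build the request body; uncomment the curl lines to hit your live instance.
payload='{"q":"hello","source":"en","target":"ar","format":"text"}'
echo "$payload"
# curl -s -X POST http://127.0.0.1:5550/translate \
#   -H 'Content-Type: application/json' -d "$payload"
```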

Update it

Of course, after some time the image will become outdated and you will need to update it. What I love about docker is that it's easy to update; really, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does is: 1) stop the containers, 2) pull the latest update (download it) and 3) re-run the containers again!

Firewall

If you use a firewall (ufw, for example) you really do not need any ports other than 443 and 80, as we use an nginx reverse proxy.

#howto #selfhost #docker


Like my work? Support me: https://donate.esmailelbob.xyz

Nitter lets you browse twitter with privacy in mind, and actually with more features, like RSS feeds. It's a proxy for twitter, so you can follow the people you like on twitter without actually giving up your data.

It's easy to host, but before I start I will assume that:

* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any dns entries, unless you want to create a subdomain for this container. In that case, go to your domain's dns panel and add either a CNAME entry that looks like subdomain.domain.com with the root domain domain.com as its target, or an A entry for subdomain.domain.com with your VPS IP as its target.

nitter docker-compose file

We need a docker-compose.yml file so we can start nitter; for me, I use this file:

version: "3"

services:

  nitter:
    image: zedeus/nitter:latest
    container_name: nitter
    ports:
      - "127.0.0.1:4040:8080" # Replace with "8080:8080" if you don't use a reverse proxy
    volumes:
      - ./nitter.conf:/src/nitter.conf:ro
    depends_on:
      - nitter-redis
    restart: unless-stopped

  nitter-redis:
    image: redis:6-alpine
    container_name: nitter-redis
    command: redis-server --save 60 1 --loglevel warning
    volumes:
      - nitter-redis:/data
    restart: unless-stopped

volumes:
  nitter-redis:

Optional stuff to change:

127.0.0.1:4040: The IP and port number for nitter (so we can use it later for nginx to reverse proxy it)
./nitter.conf: Where our config for nitter will be saved, and under what name

Here is a real-world example of a nitter.conf file:

[Server]
address = "0.0.0.0"
port = 8080
https = false  # disable to enable cookies when not using https
httpMaxConnections = 100
staticDir = "./public"
title = "nitter"
hostname = "nitter.esmailelbob.xyz"

[Cache]
listMinutes = 240  # how long to cache list info (not the tweets, so keep it high)
rssMinutes = 10  # how long to cache rss queries
redisHost = "nitter-redis" # matches the redis service name in docker-compose
redisPort = 6379
redisPassword = ""
redisConnections = 20  # connection pool size
redisMaxConnections = 30
# max, new connections are opened when none are available, but if the pool size
# goes above this, they're closed when released. don't worry about this unless
# you receive tons of requests per second

[Config]
hmacKey = "13441753" # random key for cryptographic signing of video urls
base64Media = true # use base64 encoding for proxied media urls
enableRSS = true  # set this to false to disable RSS feeds
enableDebug = false  # enable request logs and debug endpoints
proxy = ""  # http/https url, SOCKS proxies are not supported
proxyAuth = ""
tokenCount = 10
# minimum amount of usable tokens. tokens are used to authorize API requests,
# but they expire after ~1 hour, and have a limit of 187 requests.
# the limit gets reset every 15 minutes, and the pool is filled up so there's
# always at least $tokenCount usable tokens. again, only increase this if
# you receive major bursts all the time

# Change default preferences here, see src/prefs_impl.nim for a complete list
[Preferences]
theme = "Nitter"
replaceTwitter = "nitter.esmailelbob.xyz"
replaceYouTube = "invidious.esmailelbob.xyz"
replaceReddit = "libreddit.esmailelbob.xyz"
replaceInstagram = "bibliogram.esmailelbob.xyz"
proxyVideos = false
hlsPlayback = true
infiniteScroll = true

NOTE: You do not change the port here in nitter's config. Docker and the image inside it treat these as separate things: the port we changed in the docker-compose file is what is actually exposed to nginx, while the port in nitter's config is the container-internal port that docker looks for (8080) and maps to the one we set in the docker-compose file.

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

the -d option detaches the container, so docker does not print the running application's logs, but if you want to see the logs you can run:

sudo docker-compose logs -f -t

To check whether there are any weird behaviors or errors.

Nginx

Now that we have made sure it's running well, we need to serve it over the internet (called a reverse proxy), so without much talk, here is our server block for nitter:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:4040/;
       }
}

server_name: Change this to match your nitter domain name
include: Our reverse proxy file
proxy_pass: The IP and port of our running docker image

After this, nitter should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://

Update it

Of course, after some time the image will become outdated and you will need to update it. What I love about docker is that it's easy to update; really, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does is: 1) stop the containers, 2) pull the latest update (download it) and 3) re-run the containers again!

Firewall

If you use a firewall (ufw, for example) you really do not need any ports other than 443 and 80, as we use an nginx reverse proxy.

#howto #selfhost #docker


Like my work? Support me: https://donate.esmailelbob.xyz

Snikket is a Prosody IM server with a suite of plugins out of the box. It's an easy-to-install XMPP server with plugins already enabled, such as file uploads, group chats and voice/video calls using TURN!

It's easy to host, but before I start I will assume that:

* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certs: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

System Requirements

RAM: 1GB

Changes in DNS (domain side)

You need to add 3 entries: the main snikket domain, share.[main snikket domain] and groups.[main snikket domain]. Add them as either A entries or CNAME entries; it does not matter!

snikket docker-compose file and snikket.conf

We need a docker-compose.yml file and a snikket.conf file so we can configure and start snikket; for me, I use this file:

version: "3.3"

services:
  snikket_proxy:
    container_name: snikket-proxy
    image: snikket/snikket-web-proxy:beta
    env_file: snikket.conf
    network_mode: host
    volumes:
      - ./SNIKKET_DATA:/snikket
      - ./SNIKKET_DATA:/var/www/html/.well-known/acme-challenge
    restart: "unless-stopped"
  snikket_certs:
    container_name: snikket-certs
    image: snikket/snikket-cert-manager:beta
    env_file: snikket.conf
    volumes:
      - ./SNIKKET_DATA:/snikket
      - ./SNIKKET_DATA:/var/www/.well-known/acme-challenge
    restart: "unless-stopped"
  snikket_portal:
    container_name: snikket-portal
    image: snikket/snikket-web-portal:beta
    network_mode: host
    env_file: snikket.conf
    restart: "unless-stopped"

  snikket_server:
    container_name: snikket
    image: snikket/snikket-server:beta
    network_mode: host
    volumes:
      - ./SNIKKET_DATA:/snikket
    env_file: snikket.conf
    restart: "unless-stopped"

volumes:
  acme_challenges:
  snikket_data:

Optional stuff to change:

./SNIKKET_DATA: This is where all of our snikket data will live: users, avatars, chats and so on.

Snikket.conf file:

# The primary domain of your Snikket instance
SNIKKET_DOMAIN=esmailelbob.xyz

# An email address where the admin can be contacted
# (also used to register your Let's Encrypt account to obtain certificates)
SNIKKET_ADMIN_EMAIL=esmail@esmailelbob.xyz

# TURNSERVER PORTS
SNIKKET_TWEAK_TURNSERVER_MIN_PORT=49152
SNIKKET_TWEAK_TURNSERVER_MAX_PORT=65535

# SNIKKET PORTS
SNIKKET_TWEAK_HTTP_PORT=5080
SNIKKET_TWEAK_HTTPS_PORT=5443

SNIKKET_DOMAIN: Your XMPP domain, EX: xmpp.esmailelbob.xyz. In my case I wanted to make it the root domain (esmailelbob.xyz), so I had to sacrifice and use lang.esmailelbob.xyz as my main domain :)
SNIKKET_ADMIN_EMAIL: If you are going to use this for personal use, it does not matter whether you write a real email. Snikket sends a welcome message to registered users telling them to use this email if they want to contact the admin
SNIKKET_TWEAK_TURNSERVER_MIN_PORT and SNIKKET_TWEAK_TURNSERVER_MAX_PORT: The port range the TURN server uses for making video/voice calls
SNIKKET_TWEAK_HTTP_PORT and SNIKKET_TWEAK_HTTPS_PORT: The http and https ports for snikket. We will use the HTTP port for nginx anyway, so it does not really matter

Spin it up!

Now we arrive at the tricky part. Because Snikket aims to be easy to use, it does not leave much customization in our hands. If, like me, you run Snikket behind a reverse proxy and use certbot to issue certificates, you first need to create a dummy website: a server block that listens on port 80 and serves any static HTML page, using the same domain(s) as Snikket. Why? Snikket's built-in certbot (inside docker) fails to look up your website IF your website does not already have HTTPS. I know it's confusing: you need certbot to make certbot work, or in other words you need HTTPS before you can issue HTTPS. So first create a dummy nginx block for all three Snikket domains (share.[snikket domain], groups.[snikket domain] and the main [snikket domain]), run certbot on the host/VPS, not inside docker, exactly as you would for any other domain, and issue the certificates. Then prepare the nginx server blocks for Snikket, and the last thing is to run Snikket's docker compose.
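As a sketch of that dummy-site step (the domain and web root are placeholders, and the file would normally live under /etc/nginx/sites-available/), it could look like:

```shell
# Throwaway HTTP-only server block covering all three Snikket domains
# (snikket.example.com is a placeholder -- use your real domain).
SNIKKET_DOMAIN="snikket.example.com"
cat > dummy-snikket.conf <<EOF
server {
    listen 80;
    listen [::]:80;
    server_name $SNIKKET_DOMAIN groups.$SNIKKET_DOMAIN share.$SNIKKET_DOMAIN;
    root /var/www/html;  # any static page works
}
EOF
# Then, on the host (NOT inside docker), issue the certificates, e.g.:
# sudo certbot --nginx -d $SNIKKET_DOMAIN -d groups.$SNIKKET_DOMAIN -d share.$SNIKKET_DOMAIN
```

The certbot line is left commented because it only works once your real DNS entries are in place.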

Nginx and reverse proxy

With the certificates in place, we need to serve Snikket over the internet (the reverse proxy part), so without much talk, here are our server blocks for Snikket:

server {
  # Accept HTTP connections
  listen 80;
  listen [::]:80;

  server_name [snikket domain];
  server_name groups.[snikket domain];
  server_name share.[snikket domain];

  location / {
      proxy_pass http://localhost:5080/;
      proxy_set_header      Host              $host;
      proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;

      # A bit of headroom over the 16MB accepted by Prosody.
      client_max_body_size 20M;
  }
}

server {
  # Accept HTTPS connections
  listen [::]:443 ssl ;
  listen 443 ssl;
  ssl_certificate /etc/letsencrypt/live/[snikket domain]/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/[snikket domain]/privkey.pem; # managed by Certbot

  server_name [snikket domain];

  location / {
      proxy_pass https://localhost:5443/;
      proxy_set_header      Host              $host;
      proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;
      # REMOVE THIS IF YOU CHANGE `localhost` TO ANYTHING ELSE ABOVE
      proxy_ssl_verify      off;
      proxy_set_header      X-Forwarded-Proto https;
      proxy_ssl_server_name on;

      # A bit of headroom over the 16MB accepted by Prosody.
      client_max_body_size 20M;
  }
  
  # Uncomment this if, like me, you want to sacrifice your root domain for snikket and serve your website and other stuff from another subdomain
  #return 301 https://lang.esmailelbob.xyz; 
}

server {
  # Accept HTTPS connections
  listen [::]:443 ssl ;
  listen 443 ssl;
  ssl_certificate /etc/letsencrypt/live/groups.[snikket domain]/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/groups.[snikket domain]/privkey.pem; # managed by Certbot

  server_name groups.[snikket domain];

  location / {
      proxy_pass https://localhost:5443/;
      proxy_set_header      Host              $host;
      proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;
      # REMOVE THIS IF YOU CHANGE `localhost` TO ANYTHING ELSE ABOVE
      proxy_ssl_verify      off;
      proxy_set_header      X-Forwarded-Proto https;
      proxy_ssl_server_name on;

      # A bit of headroom over the 16MB accepted by Prosody.
      client_max_body_size 20M;
  }
}

server {
  # Accept HTTPS connections
  listen [::]:443 ssl ;
  listen 443 ssl;
  ssl_certificate /etc/letsencrypt/live/share.[snikket domain]/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/share.[snikket domain]/privkey.pem; # managed by Certbot

  server_name share.[snikket domain];

  location / {
      proxy_pass https://localhost:5443/;
      proxy_set_header      Host              $host;
      proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;
      # REMOVE THIS IF YOU CHANGE `localhost` TO ANYTHING ELSE ABOVE
      proxy_ssl_verify      off;
      proxy_set_header      X-Forwarded-Proto https;
      proxy_ssl_server_name on;

      # A bit of headroom over the 16MB accepted by Prosody.
      client_max_body_size 20M;
  }
}

* [snikket domain]: replace this with your Snikket domain name
* proxy_pass: the IP and port of our running docker image

I split the server blocks because nginx cannot match one domain against another domain's certificate, so yup, for each subdomain you literally need to create a whole new server block :)

Docker's turn

After that we need to run our container so just run:

docker-compose up -d

The -d option runs the containers detached (in the background), so docker does not print the application logs. If you want to follow the logs you can run:

sudo docker-compose logs -f -t

to check for any weird behavior or errors.

Update it

Of course, after some time the image will be outdated and you will need to update it. What I love about docker is that updating is easy, really, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What this does: 1) stops the containers, 2) pulls the latest images (downloads the last update) and 3) starts the containers back up!

Firewall

If you use a firewall (ufw for example) you need to open:

* TCP ports 80, 443, 5222, 5269 and 5000
* UDP ports 49152-65535 (taken from my snikket.conf, the TURN server port range)
* TCP & UDP ports 3478, 3479, 5349 and 5350
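If your firewall is ufw, the list above can be turned into commands with a small loop. This sketch only writes the commands to a file for review, nothing is opened until you run it yourself:

```shell
# TCP-only ports Snikket needs (HTTP/S, XMPP c2s/s2s, file share)
for p in 80 443 5222 5269 5000; do
    echo "ufw allow ${p}/tcp"
done > open-ports.sh
# TURN signalling ports (ufw without /proto opens both TCP and UDP)
for p in 3478 3479 5349 5350; do
    echo "ufw allow ${p}"
done >> open-ports.sh
# TURN relay range from my snikket.conf (adjust to yours)
echo "ufw allow 49152:65535/udp" >> open-ports.sh
cat open-ports.sh
# Review it, then apply with: sudo sh open-ports.sh
```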

For more info get back to: https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/firewall.md#ports

SOURCES:

For more depth, you can read the same resources I read while setting up my own Snikket server:

* https://snikket.org/service/quickstart/
* https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/reverse_proxy.md
* https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/firewall.md

#howto #selfhost #docker


Like my work?, support me: https://donate.esmailelbob.xyz

Pixelfed is the open source alternative to Instagram. So if you want to post your pictures online and let strangers see them (for whatever reason) but are scared of Facebook (or should I say Meta?) keeping a copy of your pictures forever, you can self-host your own Instagram thanks to pixelfed!

It's easy to host it but before I start I will assume that: * You are running debian 11 * You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps * Your linux distro is up-to-date (sudo apt update && sudo apt upgrade) * You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name * Have sudo access or root account * Already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose * Already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp * Already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx * Already have certbot to issue cert: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, except if you want a subdomain for this container: go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com with its target set to the root domain domain.com, or an A entry for subdomain.domain.com with its target set to the IP of your VPS.
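To illustrate, the two options look roughly like this in BIND-style zone notation (hypothetical names and a documentation IP; your registrar's panel will have its own form fields, the lines are just written to a scratch file here):

```shell
cat > dns-example.txt <<'EOF'
; Option 1: a CNAME pointing the subdomain at the root domain
subdomain.domain.com.  3600  IN  CNAME  domain.com.
; Option 2: an A record pointing the subdomain straight at the VPS IP
subdomain.domain.com.  3600  IN  A      203.0.113.10
EOF
cat dns-example.txt
```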

Initial setup

Before we get to the docker-compose file, we first need to build the image (sadly, pixelfed does not provide a pre-built image), so we need to clone pixelfed's repo and build it ourselves.

1. Clone the pixelfed repo:

git clone https://github.com/pixelfed/pixelfed.git
cd pixelfed
git checkout dev

2. Now it's time to build the pixelfed docker image:

docker build . -t pixelfed:latest -f contrib/docker/Dockerfile.apache

3. Now copy .env.example to .env.docker (this is our config file):

cp .env.example .env.docker

This is my .env.docker file:

## Crypto and we will generate this later
APP_KEY=

## General Settings
APP_NAME="Pixelfed Prod"
APP_ENV=production
APP_DEBUG=false
APP_URL=https://pixelfed.esmailelbob.xyz
APP_DOMAIN="pixelfed.esmailelbob.xyz"
ADMIN_DOMAIN="pixelfed.esmailelbob.xyz"
SESSION_DOMAIN="pixelfed.esmailelbob.xyz"

## !!!Make sure to enable it after you finish generating keys and migrating!!!
#ENABLE_CONFIG_CACHE=true

OPEN_REGISTRATION=false
ENFORCE_EMAIL_VERIFICATION=true
PF_MAX_USERS=1000
OAUTH_ENABLED=true

APP_TIMEZONE=UTC
APP_LOCALE=en

## Pixelfed Tweaks
LIMIT_ACCOUNT_SIZE=false
MAX_ACCOUNT_SIZE=1000000
MAX_PHOTO_SIZE=15000
MAX_AVATAR_SIZE=2000
MAX_CAPTION_LENGTH=500
MAX_BIO_LENGTH=125
MAX_NAME_LENGTH=30
MAX_ALBUM_LENGTH=4
IMAGE_QUALITY=80
PF_OPTIMIZE_IMAGES=true
PF_OPTIMIZE_VIDEOS=true
ADMIN_ENV_EDITOR=false
ACCOUNT_DELETION=true
ACCOUNT_DELETE_AFTER=false
MAX_LINKS_PER_POST=0

## Instance
#INSTANCE_DESCRIPTION=
INSTANCE_PUBLIC_HASHTAGS=true
#INSTANCE_CONTACT_EMAIL=
INSTANCE_PUBLIC_LOCAL_TIMELINE=true
#BANNED_USERNAMES=
STORIES_ENABLED=true
RESTRICTED_INSTANCE=false

## Mail
MAIL_DRIVER=log
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_FROM_ADDRESS="pixelfed@example.com"
MAIL_FROM_NAME="Pixelfed"
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null

## Databases (MySQL)
DB_CONNECTION=mysql
DB_DATABASE=pixelfed_prod
DB_HOST=db
DB_PASSWORD=pixelfed_db_pass
DB_PORT=3306
DB_USERNAME=pixelfed
# pass the same values to the db itself
MYSQL_DATABASE=pixelfed_prod
MYSQL_PASSWORD=pixelfed_db_pass
MYSQL_RANDOM_ROOT_PASSWORD=true
MYSQL_USER=pixelfed

## Databases (Postgres)
#DB_CONNECTION=pgsql
#DB_HOST=postgres
#DB_PORT=5432
#DB_DATABASE=pixelfed
#DB_USERNAME=postgres
#DB_PASSWORD=postgres

## Cache (Redis)
REDIS_CLIENT=phpredis
REDIS_SCHEME=tcp
REDIS_HOST=redis
REDIS_PASSWORD=redis_password
REDIS_PORT=6379
REDIS_DATABASE=0

## EXPERIMENTS 
EXP_LC=false
EXP_REC=false
EXP_LOOPS=false

## ActivityPub Federation
ACTIVITY_PUB=true
AP_REMOTE_FOLLOW=true
AP_SHAREDINBOX=true
AP_INBOX=true
AP_OUTBOX=true
ATOM_FEEDS=true
NODEINFO=true
WEBFINGER=true

## S3
FILESYSTEM_DRIVER=local
FILESYSTEM_CLOUD=s3
PF_ENABLE_CLOUD=false
#AWS_ACCESS_KEY_ID=
#AWS_SECRET_ACCESS_KEY=
#AWS_DEFAULT_REGION=
#AWS_BUCKET=
#AWS_URL=
#AWS_ENDPOINT=
#AWS_USE_PATH_STYLE_ENDPOINT=false

## Horizon
HORIZON_DARKMODE=true

## COSTAR - Confirm Object Sentiment Transform and Reduce
PF_COSTAR_ENABLED=false

# Media
MEDIA_EXIF_DATABASE=false

## Logging
LOG_CHANNEL=stderr

## Image
IMAGE_DRIVER=imagick

## Broadcasting
BROADCAST_DRIVER=log  # log driver for local development

## Cache
CACHE_DRIVER=redis

## Purify
RESTRICT_HTML_TYPES=true

## Queue
QUEUE_DRIVER=redis

## Session
SESSION_DRIVER=redis

## Trusted Proxy
TRUST_PROXIES="*"

## Passport
#PASSPORT_PRIVATE_KEY=
#PASSPORT_PUBLIC_KEY=

And edit .env.docker as needed, but make sure to change APP_DOMAIN, ADMIN_DOMAIN and SESSION_DOMAIN to your pixelfed domain (e.g. pixelfed.esmailelbob.xyz) and change APP_URL to https:// plus your pixelfed domain (e.g. https://pixelfed.esmailelbob.xyz).
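Those edits can also be scripted with sed. This is just a sketch: it writes a tiny stand-in .env.docker first so it is self-contained, and pixelfed.example.com is a placeholder for your real domain:

```shell
PF_DOMAIN="pixelfed.example.com"   # placeholder -- use your real domain
# Stand-in for the real .env.docker so this sketch runs on its own.
printf 'APP_URL=https://old.example\nAPP_DOMAIN="old.example"\nADMIN_DOMAIN="old.example"\nSESSION_DOMAIN="old.example"\n' > .env.docker
# Point all four settings at the new domain (APP_URL gets the https:// scheme).
sed -i \
  -e "s|^APP_URL=.*|APP_URL=https://${PF_DOMAIN}|" \
  -e "s|^APP_DOMAIN=.*|APP_DOMAIN=\"${PF_DOMAIN}\"|" \
  -e "s|^ADMIN_DOMAIN=.*|ADMIN_DOMAIN=\"${PF_DOMAIN}\"|" \
  -e "s|^SESSION_DOMAIN=.*|SESSION_DOMAIN=\"${PF_DOMAIN}\"|" \
  .env.docker
grep -E "^(APP_URL|APP_DOMAIN|ADMIN_DOMAIN|SESSION_DOMAIN)" .env.docker  # sanity check
```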

pixelfed docker-compose file

After we finish with the .env.docker file, we need a docker-compose.yml file so we can start pixelfed. For me, I use this file:

---
version: '3'

# In order to set configuration, please use a .env file in
# your compose project directory (the same directory as your
# docker-compose.yml), and set database options, application
# name, key, and other settings there.
# A list of available settings is available in .env.example
#
# The services should scale properly across a swarm cluster
# if the volumes are properly shared between cluster members.

services:
## App and Worker
  app:
    # Comment to use dockerhub image
    build:
      context: .
      dockerfile: contrib/docker/Dockerfile.apache
    image: pixelfed:latest
    restart: unless-stopped
    env_file:
      - .env.docker
    volumes:
     # - app-storage:/var/www/storage
      - "./PIXELFED_DATA:/var/www/storage"
      - app-bootstrap:/var/www/bootstrap
      - "./.env.docker:/var/www/.env"
    networks:
      - external
      - internal
    ports:
      - "127.0.0.1:8282:80"
    depends_on:
      - db
      - redis

  worker:
    build:
      context: .
      dockerfile: contrib/docker/Dockerfile.apache
    image: pixelfed:latest
    restart: unless-stopped
    env_file:
      - .env.docker
    volumes:
     # - app-storage:/var/www/storage
      - "./PIXELFED_DATA:/var/www/storage"
      - app-bootstrap:/var/www/bootstrap
    networks:
      - external
      - internal
    command: gosu www-data php artisan horizon
    depends_on:
      - db
      - redis

## DB and Cache
  db:
    image: mysql:8.0
    restart: unless-stopped
    networks:
      - internal
    command: --default-authentication-plugin=mysql_native_password
    env_file:
      - .env.docker
    volumes:
      - "db-data:/var/lib/mysql"

  redis:
    image: redis:5-alpine
    restart: unless-stopped
    env_file:
      - .env.docker
    volumes:
      - "redis-data:/data"
    networks:
      - internal

volumes:
  db-data:
  redis-data:
  app-storage:
  app-bootstrap:

networks:
  internal:
    internal: true
  external:
    driver: bridge

Optional stuff to change:

* ./PIXELFED_DATA: where our photos will be saved
* 127.0.0.1:8282: change this if you want pixelfed on a different port

Spin it up!

Now that we are done editing and everything is cool, we need to run our containers, so just run:

docker-compose up -d

The -d option runs the containers detached (in the background), so docker does not print the application logs. If you want to follow the logs you can run:

sudo docker-compose logs -f -t

to check for any weird behavior or errors.

Then we need to run a couple of commands to generate keys (so we can edit pixelfed from the GUI later) and to create the database! First, generate the keys:

docker-compose exec app php artisan key:generate
docker-compose exec app php artisan passport:keys
docker-compose exec app php artisan route:cache
docker-compose exec app php artisan cache:clear

And make sure we have the keys:

cat .env.docker | grep APP_KEY

Now time for the db (database):

docker-compose restart app
docker-compose exec app php artisan config:cache
docker-compose exec app php artisan migrate

NOTE: If migration fails, take it down (by docker-compose down) and then start it again (by docker-compose up -d) and re-run the migration command (docker-compose exec app php artisan migrate)

Last but not least, it's time to create an account:

docker-compose exec app php artisan user:create

NOTE: to enable editing pixelfed's settings from the pixelfed website, add this to .env.docker:

ENABLE_CONFIG_CACHE=true

Nginx

Now, after we make sure it's running well, we need to serve it over the internet (the reverse proxy part), so without much talk, here is our server block for pixelfed:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:8282/;
       }
}

* server_name: change this to match your pixelfed domain name
* include: our reverse proxy file
* proxy_pass: the IP and port of our running docker image

After this, pixelfed should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://

Update it

Of course, after some time the image will be outdated and you will need to update it. What I love about docker is that updating is easy, really, just run:

git pull
docker-compose down
docker build . -t pixelfed:latest -f contrib/docker/Dockerfile.apache
docker-compose up -d

What this does: 1) pulls the latest code from GitHub, 2) stops the containers, 3) rebuilds the pixelfed image and 4) starts the containers back up!

Firewall

If you use firewall (ufw for example) you really do not need any ports other than 443 and 80 as we use nginx reverse proxy

#howto #selfhost #docker


Like my work?, support me: https://donate.esmailelbob.xyz

Gitea is like GitHub and GitLab: a git backend with an eye-candy web GUI (just like GitHub). To be honest I have nothing more to say, as I think it's self-explanatory.

It's easy to host it but before I start I will assume that: * You are running debian 11 * You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps * Your linux distro is up-to-date (sudo apt update && sudo apt upgrade) * You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name * Have sudo access or root account * Already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose * Already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp * Already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx * Already have certbot to issue cert: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

System Requirements

CPU: 2 cores
RAM: 1 GB

Changes in DNS (domain side)

You really do not need to add any DNS entries, except if you want a subdomain for this container: go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com with its target set to the root domain domain.com, or an A entry for subdomain.domain.com with its target set to the IP of your VPS.

Initial setup

We need a docker-compose.yml file, but first some initial setup:

1. Create a new user account on our VPS (the host machine), name it git, and add a password for it:

sudo useradd -m git
sudo passwd git

2. Log in as the git user:

su git

And we now need to run these commands (save their output for later):

echo $(id -u)
echo $(id -g)

Now switch back to your own user (press ctrl+d).

gitea docker-compose file

Now we prepare our docker-compose.yml file:

version: "3"

networks:
    gitea:
        external: false

services:
  server:
    image: gitea/gitea
    container_name: gitea
    environment:
        - USER_UID=1001 # Enter the UID found from previous command output
        - USER_GID=1001 # Enter the GID found from previous command output
        - GITEA__database__DB_TYPE=mysql
        - GITEA__database__HOST=db:3306
        - GITEA__database__NAME=gitea
        - GITEA__database__USER=gitea
        - GITEA__database__PASSWD=giteaaa
        - GNUPGHOME=/data/git/.gnupg/
    restart: always
    networks:
        - gitea
    volumes:
        - ./data:/data
        - /etc/timezone:/etc/timezone:ro
        - /etc/localtime:/etc/localtime:ro
        - /home/git/.ssh/:/data/git/.ssh
    ports:
        - "127.0.0.1:3330:3000"
        - "127.0.0.1:2222:22"
    depends_on:
        - db

  db:
    image: mysql
    restart: always
    environment:
        - MYSQL_ROOT_PASSWORD=gitea
        - MYSQL_USER=gitea
        - MYSQL_PASSWORD=gitea
        - MYSQL_DATABASE=gitea
    networks:
        - gitea
    volumes:
        - ./mysql:/var/lib/mysql

Required stuff to change:

* USER_UID: change this to the number we got from running echo $(id -u) as the git user
* USER_GID: change this to the number we got from running echo $(id -g) as the git user
* MYSQL_PASSWORD and MYSQL_ROOT_PASSWORD: the passwords for our gitea database, so, you know, change them?

Optional stuff to change:

* 127.0.0.1:3330:3000: change this to serve gitea online (through nginx later)
* 127.0.0.1:2222:22: needed if we will use SSH to clone our repos
* ./data:/data: where gitea's config files and other needed data are saved
* /home/git/.ssh/:/data/git/.ssh: for SSH; we will create an SSH key for the git user later
* GNUPGHOME=/data/git/.gnupg/: where git looks for GPG keys to sign commits. Since gitea 1.17 the folder changed; on a fresh install you would not need this, but someone like me had to point it at where the folder used to be.
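Filling in USER_UID/USER_GID by hand is easy to forget, so here is a small sketch that substitutes them with sed. It writes a stand-in compose fragment so it runs on its own (on your VPS you would point it at the real docker-compose.yml), and the id lookups fall back to the current user if no git account exists:

```shell
# Stand-in compose fragment so the sketch is self-contained;
# in practice you edit your real docker-compose.yml.
printf '        - USER_UID=99999\n        - USER_GID=99999\n' > docker-compose.yml
# Look up the git user's IDs (falls back to the current user
# when there is no "git" account on this machine).
GIT_UID=$(id -u git 2>/dev/null || id -u)
GIT_GID=$(id -g git 2>/dev/null || id -g)
sed -i \
  -e "s/USER_UID=[0-9]*/USER_UID=${GIT_UID}/" \
  -e "s/USER_GID=[0-9]*/USER_GID=${GIT_GID}/" \
  docker-compose.yml
cat docker-compose.yml
```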

Spin it up!

Now that we are done editing and everything is cool, we need to run our containers, so just run:

docker-compose up -d

The -d option runs the containers detached (in the background), so docker does not print the application logs. If you want to follow the logs you can run:

sudo docker-compose logs -f -t

to check for any weird behavior or errors.

Nginx

Now after we make sure it's running well. We need to serve it over the internet (called reverse proxy) so without much talk, here is our server block for gitea:

server {
        listen [::]:80;
        listen 80;
       
       server_name [domain name];

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:3330/;
       }
}

* server_name: change this to match your gitea domain name
* include: our reverse proxy file
* proxy_pass: the IP and port of our running docker image

Configure Gitea in web gui

After we run it, visit gitea in your browser. We really do not need to change anything here. You can adjust “Administrator Account Settings”, “Server and Third-Party Service Settings” and “Email Settings” (you will find these under the “Optional Settings” section at the end of the page), but other than that I really recommend not changing settings like the gitea base URL on this page: after some trial and error I noticed that when I change it later in the app.ini file instead (more on that in a moment), gitea actually works and I can clone fine.

So after we are done, we need to edit app.ini (if you used my docker file it should be located at data/gitea/conf/app.ini): change DOMAIN and SSH_DOMAIN to our gitea domain name (in my case git.esmailelbob.xyz), and change ROOT_URL to “https://” plus our gitea domain name, so in my case https://git.esmailelbob.xyz/. Now restart docker-compose (docker-compose down; docker-compose up -d) and you are good to go :)
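The app.ini edits can be sketched with sed too (git.example.com is a placeholder, and a stand-in app.ini is written first so the snippet is self-contained; on the server you would edit data/gitea/conf/app.ini):

```shell
GITEA_DOMAIN="git.example.com"   # placeholder -- use your gitea domain
# Stand-in app.ini so the sketch runs on its own.
printf 'DOMAIN = localhost\nSSH_DOMAIN = localhost\nROOT_URL = http://localhost:3000/\n' > app.ini
# DOMAIN and SSH_DOMAIN get the bare domain, ROOT_URL gets https:// and a trailing slash.
sed -i \
  -e "s|^DOMAIN *=.*|DOMAIN = ${GITEA_DOMAIN}|" \
  -e "s|^SSH_DOMAIN *=.*|SSH_DOMAIN = ${GITEA_DOMAIN}|" \
  -e "s|^ROOT_URL *=.*|ROOT_URL = https://${GITEA_DOMAIN}/|" \
  app.ini
cat app.ini
```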

After this you should be up and running for gitea! :) just do not forget to run certbot --nginx to make it secure with https://

Update it

Of course, after some time the image will be outdated and you will need to update it. What I love about docker is that updating is easy, really, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What this does: 1) stops the containers, 2) pulls the latest images (downloads the last update) and 3) starts the containers back up!

Firewall

If you use firewall (ufw for example) you really do not need any ports other than 443 and 80 as we use nginx reverse proxy

Now we enable SSH for our gitea instance – This is Optional

If you want to enable SSH clone. It's easy to enable for gitea docker.

Host machine, VPS or server

We need to do all of these steps on our VPS side.

1. Make sure we already mapped the SSH port in our docker-compose file (if you followed along, you have already done this):

ports:
     [...]
  - "127.0.0.1:2222:22"

2. Make sure we already added the UID and GID of the git user in our docker-compose file (if you followed along, you have already done this):

environment:
  - USER_UID=1000 # this is for example, please change it
  - USER_GID=1000

3. Mount the .ssh folder of the git user in the docker-compose file, so the git user on our host/VPS and the git user inside the container see the same SSH keys (if you followed along, you have already done this):

volumes:
  - /home/git/.ssh/:/data/git/.ssh

4. Now generate an SSH key for gitea itself. This key pair will be used to authenticate the git user on the host to the container (no need to switch to the git user for this):

sudo -u git ssh-keygen -t rsa -b 4096 -C "Gitea Host Key"

5. Now append the public key we created for the git user to authorized_keys (again, no need to switch to the git user on the host or VPS), so the git user on the host and the git user inside the gitea container share the same authorized SSH keys:

sudo -u git cat /home/git/.ssh/id_rsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys

sudo -u git chmod 600 /home/git/.ssh/authorized_keys

6. Now view /home/git/.ssh/authorized_keys (cat /home/git/.ssh/authorized_keys) and make sure it looks like:

# SSH pubkey from git user
ssh-rsa <Gitea Host Key>

7. Now create an executable script:

cat <<"EOF" | sudo tee /usr/local/bin/gitea

#!/bin/sh

ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"

EOF

sudo chmod +x /usr/local/bin/gitea

This script forwards commands from the git user on the host to the gitea container.

Now get back to our client or desktop

We need to do all of these steps on our desktop or own PC side.

So now on PC simply create a ssh key:

ssh-keygen -t ecdsa

Then go to ~/.ssh/, grab the public key of your SSH key, log in to your gitea and import it in settings ([gitea domain url]/user/settings). Create a test repo and try to clone it over SSH :)

For more info please get back to gitea docs at https://docs.gitea.io/en-us/install-with-docker/#sshing-shim-with-authorized_keys

Now we enable GPG commit sign for our gitea instance – This is Optional

If you want to see that sweet little green lock beside your commits and let people know it was really you who made those changes, you need to enable GPG signing inside gitea, and it's simple!

First we need to login inside docker itself as the git user (do not mix it up with git user on our host machine) to do so just type:

docker exec -it -u git gitea bash

Now do not panic, you are inside the gitea docker container as the git user! We simply need to generate a GPG key pair, which is as simple as:

gpg --full-generate-key

Answer the questions and make sure to type the name and email correctly, as we need them later! If you get a permissions error (gpg: WARNING: unsafe permissions on homedir '/home/path/to/user/.gnupg') you might want to try:

chown -R $(whoami) data/git/.gnupg/
chmod 600 ~/.gnupg/* data/gitea/home/.gnupg/
chmod 700 ~/.gnupg data/gitea/home/.gnupg/

data/git/.gnupg/: where the .gnupg folder is saved inside the docker container. If you used the same setup as mine you do not have to worry, but if you changed the volumes you may need to find where it lives in your case!

After we are done, you can run:

gpg --list-secret-keys

to list the created keys; note the ID, name and email (we need them later).
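If you want to grab the key ID non-interactively (for app.ini or .gitconfig later), gpg has a machine-readable mode, gpg --list-secret-keys --with-colons, where the key ID is the fifth field of the sec line. A sketch that parses a stand-in sample of that output:

```shell
# Stand-in for the output of: gpg --list-secret-keys --with-colons
cat > gpg-sample.txt <<'EOF'
sec:u:4096:1:55B46434BB81637F:1650000000:::u:::scESC:::+:::23::0:
uid:u::::1650000000::HASH::gitea <gitea@esmailelbob.xyz>::::::::::0:
EOF
# Key ID is field 5 of the "sec" record.
KEY_ID=$(awk -F: '/^sec/ {print $5; exit}' gpg-sample.txt)
echo "$KEY_ID"   # prints 55B46434BB81637F for the sample above
```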

Now log out of the container (press ctrl+d or type exit), edit the app.ini file (data/gitea/conf/app.ini) and paste (this is the setup I use):

[repository.signing]
DEFAULT_TRUST_MODEL = collaboratorcommitter
SIGNING_KEY         = default
SIGNING_NAME        = gitea
SIGNING_EMAIL       = gitea@esmailelbob.xyz
INITIAL_COMMIT      = always
CRUD_ACTIONS        = always
WIKI                = always
MERGES              = always

* SIGNING_KEY: leave it as default (more on that later)
* SIGNING_NAME: the same name you typed while creating the GPG key
* SIGNING_EMAIL: the same email you typed while creating the GPG key

Now restart docker (docker-compose down; docker-compose up -d) and go to your git domain's /api/v1/signing-key.gpg (e.g. git.esmailelbob.xyz/api/v1/signing-key.gpg) and make sure you see a public GPG key displayed. If you see an empty page, try changing SIGNING_KEY in app.ini to the key's ID itself instead of default.

Now we need to log back into the container as the git user (docker exec -it -u git gitea bash) and create a .gitconfig file at data/git/.gitconfig (again, if you followed my docker-compose setup it is in the same place, but if you changed the volumes you will need to find where the git folder is saved). Your .gitconfig should look like:

[user]
        email = git@esmailelbob.xyz
        name = gitea
        signingkey = 55B46434BB81637F
[commit]
        gpgsign = true
[gpg]
        program = /usr/bin/gpg
[core]
        quotepath = false
        commitGraph = true
[gc]
        writeCommitGraph = true
[receive]
        advertisePushOptions = true
        procReceiveRefs = refs/for

What you need to change:

* email: the email you typed while creating the GPG key; it should match the key
* name: the name you typed while creating the GPG key; it should match the key
* signingkey: the ID of the GPG key we created

Now leave the container's bash, restart docker (docker-compose down; docker-compose up -d) and give it a try :). Make a test repo, commit some stuff, and you should see the magic green lock.

NOTE 1: After you make the key, export its public key and add it to your gitea account in settings ([gitea domain url]/user/settings).

NOTE 2: If you want to, for example, only sign commits when the user has a GPG key in their account, or never sign commits at all, you can do that too; get back to the gitea docs for the other options. For me, I wanted it to ALWAYS sign commits.

For more info please get back to gitea docs at https://docs.gitea.io/en-us/signing/

NOTE 3: This is not gitea-specific, but related to gpg and git. On your own PC, if you want to enable GPG signing too: 1. Generate a GPG key (gpg --full-generate-key) and grab its ID, name and email for later. 2. Edit the .gitconfig file (~/.gitconfig) on your own desktop (not the VPS/host machine) so it looks like:

[filter "lfs"]
        clean = git-lfs clean -- %f
        smudge = git-lfs smudge -- %f
        process = git-lfs filter-process
        required = true
[user]
        name = Esmail EL BoB
        email = esmail@esmailelbob.xyz
        signingkey = 4984C22F0C5CACDE73B05243F44C953A3C7A4E16
[http]
        sslBackend = openssl
[commit]
        gpgsign = true

Change name, email and signingkey to the same info you used while creating the GPG key. 3. List the GPG keys installed on your desktop (gpg --list-secret-keys), view the public key of the GPG key we just created (gpg --export --armor [key-id]) and add it to your gitea account via settings ([gitea domain url]/user/settings).

Now you will be able to push commits and have them signed automatically, on gitea, GitHub or any git host really.

Add more theme options in gitea – This is Optional

If you want to add more themes to gitea in docker, we first need to know our CustomPath; if you followed along it should be data/gitea. To add themes we need the .css files, and we need to tell app.ini (gitea's config file) which themes to enable so we can later select them from the web GUI settings. So first let's create the needed folders: go to data/gitea, create a new folder called public, cd into it and create a new folder called css, so the path looks like data/gitea/public/css:

cd data/gitea
mkdir public
cd public
mkdir css
cd css

Now it's time for the .css files. Search online for gitea themes, or visit https://gitea.com/gitea/awesome-gitea#user-content-themes to grab some files to test.

We should already be in the css folder, so pick the .css file you want and download it with wget:

wget [theme url]

Now edit app.ini to enable the theme(s) we downloaded into the css folder. Open app.ini (it should be at data/gitea/conf/app.ini) and paste:

[ui]
DEFAULT_THEME = gitea
THEMES = gitea,arc-green,plex,aquamarine,dark,dracula,hotline,organizr,space-gray,hotpink,onedark,overseerr,nord,earl-grey,github,github-dark

* DEFAULT_THEME: the default theme for all users; it's okay to leave this as is
* THEMES: list all of our downloaded themes here. To know a theme's name, look at its css file name: for theme-github.css the theme name is github.
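Since each theme's name is just the css filename without the theme- prefix and .css suffix, you can build the THEMES value straight from the downloaded files. A sketch, with two dummy files standing in for real downloads:

```shell
# Stand-ins for themes you actually downloaded into data/gitea/public/css
touch theme-github.css theme-dracula.css
# Strip "theme-" and ".css", then join the names with commas
THEMES=$(ls theme-*.css | sed -e 's/^theme-//' -e 's/\.css$//' | paste -sd, -)
echo "THEMES = gitea,${THEMES}"
```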

Now restart docker (docker-compose down; docker-compose up -d), go to gitea and edit your settings (click your profile picture in the upper right > Settings > Appearance in the top bar; the URL should look like [gitea domain]/user/settings/appearance), select the theme you want and click “Update Theme”, and you should be good to go :) If nothing changed, you either downloaded the themes into the wrong folder or typed a name wrong in app.ini, so re-check it!

For more info please get back to gitea docs: https://docs.gitea.io/en-us/install-with-docker/#customization

#howto #selfhost #docker


Like my work?, support me: https://donate.esmailelbob.xyz

Nextcloud is a self-hosted service you can back up or sync your files to, just like Google Drive or Microsoft OneDrive, EXCEPT nextcloud has many more tools or “apps”: you can integrate your own jitsi-like server to video chat with people on your nextcloud instance, or back up your files with true end-to-end encryption (it's easy to enable in nextcloud).

It's easy to host it, but before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* Have sudo access or root account
* Already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* Already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* Already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* Already have certbot to issue cert: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

System Requirements

RAM: 128 – 512 MB

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com with the root domain domain.com as its target, or an A entry for subdomain.domain.com with the IP of your VPS as its target.
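
For illustration, the two options look like this in a BIND-style zone file (cloud.example.com and 203.0.113.10 are placeholders; use your own subdomain and VPS IP, and pick one option, not both):

```
; option 1: CNAME pointing at the root domain
cloud.example.com.  3600  IN  CNAME  example.com.
; option 2: A record pointing straight at the VPS IP
; cloud.example.com.  3600  IN  A  203.0.113.10
```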

nextcloud docker-compose file

We need a docker-compose.yml file to start nextcloud; this is the one I use:

version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb:10.5
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=NC
      - MYSQL_PASSWORD=NC
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    restart: always
    container_name: nextcloud
    ports:
      - 127.0.0.1:8585:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./NEXTCLOUD_DATA/:/var/www/html/data
      - ./config:/var/www/html/config
      - ./php.ini:/usr/local/etc/php/php.ini
    environment:
      - MYSQL_PASSWORD=NC
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db

Required stuff to change:
MYSQL_PASSWORD: change this to any other password
MYSQL_ROOT_PASSWORD: same as MYSQL_PASSWORD

Optional stuff to change:
./NEXTCLOUD_DATA/: this is where files uploaded to nextcloud will be saved
127.0.0.1:8585:80: change this if you want to change the port of nextcloud
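
To replace the "NC" placeholder with something strong, you can generate a random password on the command line:

```shell
# Generate a random password for MYSQL_PASSWORD and MYSQL_ROOT_PASSWORD
# (24 random bytes, base64-encoded -> a 32-character string)
openssl rand -base64 24
```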

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

the -d option runs the container detached, so docker will not print the application's logs to your terminal. If you want to see the logs you can run:

sudo docker-compose logs -f -t

to check for any weird behavior or errors.

Nginx

Now that we have made sure it's running well, we need to serve it over the internet (via a reverse proxy), so without much talk, here is our server block for nextcloud:

server {
        listen [::]:80;
        listen 80;
       
        server_name [nextcloud domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:8585/;
       }
}

server_name: change this to match the domain name of nextcloud
include: our reverse proxy file
proxy_pass: the IP and port of our running docker image

Configure Nextcloud

After it's running, visit nextcloud in your browser. Really all we need to do is type our admin account data and pick a database: since our docker-compose file already ships MariaDB, you can select MySQL/MariaDB and enter the credentials from docker-compose.yml (with db as the host), though for personal or small usage the SQLite 3 setup also works.

After this, nextcloud should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://

NOTE: to edit nextcloud's config, you will find config.php in the ./config folder we mounted (config/config.php)
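
As an example, here are a couple of settings in config.php you will likely touch (cloud.example.com is a placeholder; these key names come from nextcloud itself). Every domain you serve nextcloud on must be listed in trusted_domains, or nextcloud will reject the request:

```php
// fragment of nextcloud's config.php (placeholder domain)
'trusted_domains' =>
array (
  0 => 'cloud.example.com',
),
// nginx terminates TLS in front of the container, so tell
// nextcloud to generate https:// links:
'overwriteprotocol' => 'https',
```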

Update it

Of course, after some time the image will be outdated and you will need to update it. What I love about docker is that updating is easy; just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does: 1) stops the container, 2) pulls the latest image (downloads the update) and 3) starts the container back up!

Firewall

If you use a firewall (ufw for example), you do not need to open any ports other than 443 and 80, since nginx reverse-proxies everything.

Setup Collabora online to edit documents online – Optional

So as a nextcloud user you now want to get the most out of it in every way! Now I will show you how to enable collabora online so you can edit documents right inside nextcloud.

Get collabora running

It's simple: open the nextcloud docker-compose file and add this block to it:

  collabora:
    image: collabora/code:latest
    container_name: collabora
    restart: unless-stopped
    cap_add:
     - MKNOD
    ports:
      - 127.0.0.1:9980:9980
    environment:
      - domain=cloud.esmailelbob.xyz
      - username=username
      - password=password
      - extra_params=--o:ssl.enable=true --o:ssl.termination=true
      - dictionaries=en_US ar_EG

username & password: enables a sort of auth/login for your collabora server, so make sure to change these
domain: replace this with your nextcloud's domain name
dictionaries: tells collabora which languages you will write in; in my case Arabic and English. This is optional, so you can delete it if you are not sure (it will then load dictionaries for all languages), or list more languages to load their dictionaries
127.0.0.1:9980: the port collabora will listen on

A simple docker-compose restart will download the collabora image and set you up and running.

Nginx proxy block

Now, this is not a server block; it's a set of proxy blocks, as you can see. Simply put it inside an existing server block (say, to run both nextcloud and collabora under the same URL), or create a new subdomain and add it there (office.esmailelbob.xyz for example):

 # static files
 location ^~ /browser {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Host $http_host;
 }


 # WOPI discovery URL
 location ^~ /hosting/discovery {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Host $http_host;
 }


 # Capabilities
 location ^~ /hosting/capabilities {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Host $http_host;
 }


 # main websocket
 location ~ ^/cool/(.*)/ws$ {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Upgrade $http_upgrade;
   proxy_set_header Connection "Upgrade";
   proxy_set_header Host $http_host;
   proxy_read_timeout 36000s;
 }


 # download, presentation and image upload
 location ~ ^/(c|l)ool {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Host $http_host;
 }


 # Admin Console websocket
 location ^~ /cool/adminws {
   proxy_pass https://127.0.0.1:9980;
   proxy_set_header Upgrade $http_upgrade;
   proxy_set_header Connection "Upgrade";
   proxy_set_header Host $http_host;
   proxy_read_timeout 36000s;
 }

Of course, do not forget to update the port number here IF you changed it in the docker-compose file; if you just followed along, this step is not needed.

Nextcloud configuration

Now we go to nextcloud and install the Nextcloud Office app. Then go to Settings; under Administration you will find Office. Click it, type your collabora server's URL, click save, and it should say connected :)

NOTE: if you enabled the username and password option, the server URL format is username:password@[URL of collabora]; type this in the server URL field inside Nextcloud Office settings.
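
For example, with the username admin, the password secret, and a hypothetical collabora subdomain, the server URL would look like:

```
https://admin:secret@office.example.com
```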

Change Background jobs from AJAX to Cron – Optional

AJAX is good but it's not really reliable and I noticed problems with it, so let's change it to a cron job (which is far better). We will run this command on a schedule:

docker exec -u www-data nextcloud php cron.php

nextcloud: this is the name (or id) of the nextcloud container; if you use the same docker-compose as mine then it's called nextcloud

To add it as a cron job, first open your crontab for editing:

crontab -e

and paste:

*/5  *  *  *  * docker exec -u www-data nextcloud php cron.php

This runs the command every 5 minutes.
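
For reference, here is what the five scheduling fields in that crontab line mean:

```
# ┌──────── minute (*/5 = every 5th minute)
# │ ┌────── hour (* = every hour)
# │ │ ┌──── day of month
# │ │ │ ┌── month
# │ │ │ │ ┌ day of week
*/5  *  *  *  * docker exec -u www-data nextcloud php cron.php
```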

#howto #selfhost #docker


Like my work? Support me: https://donate.esmailelbob.xyz

searX (or, as people pronounce it, “search”) is a meta search engine, meaning searX takes search results from websites like duckduckgo, startpage and google and displays them in searx, so none of those websites can log your IP or your search query.

It's easy to host it, but before I start I will assume that:
* You are running debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your linux distro is up-to-date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* Have sudo access or root account
* Already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* Already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* Already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* Already have certbot to issue cert: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

System Requirements

RAM: 512 MB

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com with the root domain domain.com as its target, or an A entry for subdomain.domain.com with the IP of your VPS as its target.

searx docker-compose file

We need a docker-compose.yml file to start searx; this is the one I use:

version: '3.7'

services:

  searx:
    image: searx/searx:latest
    container_name: searx
    restart: unless-stopped
    ports:
      - '127.0.0.1:8787:8080'
    volumes:
      - './data/searx:/etc/searx'
    environment:
      - BASE_URL=https://searx.esmailelbob.xyz/
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - DAC_OVERRIDE

Required stuff to change:
BASE_URL: change this to your own domain name

Optional stuff to change:
./data/searx: this is where config files and other needed data for searx will be stored; I left it inside searx's root folder
127.0.0.1:8787:8080: change this if you want to change the port of searx

NOTE: your config file is saved in data/searx/settings.yml, so open it up and change any settings you need: nano data/searx/settings.yml
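
For example, two settings worth changing right away (key names assumed from a default searx settings.yml; the values here are placeholders):

```yaml
# fragment of data/searx/settings.yml
general:
  instance_name: "searx"   # the name shown in the web UI
server:
  secret_key: "change-me"  # replace with a long random string
```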

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

the -d option runs the container detached, so docker will not print the application's logs to your terminal. If you want to see the logs you can run:

sudo docker-compose logs -f -t

to check for any weird behavior or errors.

Nginx

Now that we have made sure it's running well, we need to serve it over the internet (via a reverse proxy), so without much talk, here is our server block for searx:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:8787/;
       }
}

server_name: change this to match the domain name of searx
include: our reverse proxy file
proxy_pass: the IP and port of our running docker image

After this, searx should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://

Update it

Of course, after some time the image will be outdated and you will need to update it. What I love about docker is that updating is easy; just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does: 1) stops the container, 2) pulls the latest image (downloads the update) and 3) starts the container back up!

Firewall

If you use a firewall (ufw for example), you do not need to open any ports other than 443 and 80, since nginx reverse-proxies everything.

#howto #selfhost #docker


Like my work? Support me: https://donate.esmailelbob.xyz