Blog

Thoughts, ideas and code


If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz

Since most, if not all, companies are writing blog posts about where they stand on Ukraine, I figured: why not me too? Everyone is taking a side, whether with Ukraine or with Russia.

So I'm writing this post to say I stand with Palestine (yes, Palestine). Why, you might ask? Because, in case you have not noticed, Russia is doing the same thing Israel does in Palestine: killing innocent people and killing kids just because those people defend their land. Sadly, most of the world stands on Israel's side (if you do, fuck you). So if you are going to stand against Russia because they slaughter kids and kill innocent people, at least stand with Palestine against Israel too, because trust me, they do the same (and probably more).

Thanks, 🇵🇸🇵🇸!

#thoughts

If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz

Librarian is a front-end for Odysee, just like Invidious is for YouTube. You can use Librarian to watch Odysee videos without being tracked and without the crypto sh!t.

It's easy to host, but before I start I will assume that:

* You are running Debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your Linux distro is up to date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certificates: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com with the root domain domain.com as its target, or an A entry for subdomain.domain.com with the IP of your VPS as its target.
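
For example, in zone-file notation (hypothetical names and IP; your DNS panel will have equivalent fields):

subdomain.example.com.  IN  CNAME  example.com.
; or, pointing straight at the VPS:
subdomain.example.com.  IN  A      203.0.113.10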

librarian docker-compose file

We need a docker-compose.yml file so we can start Librarian. I use this file:

version: '3'

services:
  librarian:
    #build: .
    image: nineteengladespool/librarian:latest
    ports:
      - 127.0.0.1:4403:3000
    volumes:
      - ./data/config.yml:/app/config.yml
    restart: unless-stopped

Optional stuff to change:

* 127.0.0.1:4403: The IP and port number for Librarian (so we can use it later for nginx to reverse proxy it)
* ./data/config.yml: Where our config file for Librarian will be saved

config.yml file (just to give you an idea of what it looks like)

api_url: https://api.na-backend.odysee.com/api/v1/proxy
auth_token: [**SECRET**]
blocked_claims: claimid,claim2
domain: https://librarian.esmailelbob.xyz
enable_live_stream: false
fiber_prefork: false
hmac_key: [**SECRET**]
image_cache: "false"
image_cache_dir: /var/cache/librarian
instance_privacy:
  data_collected_device: true
  data_collected_diagnostic_only: false
  data_collected_ip: true
  data_collected_url: true
  data_not_collected: false
  instance_cloudflare: false
  instance_country: Canada
  instance_provider: Kimsufi
  message: ""
  privacy_policy: ""
port: "3000"
streaming_api_url: https://api.na-backend.odysee.com/api/v1/proxy
use_http3: false
video_streaming_url: ""

[**SECRET**]: tokens that I redacted to protect myself

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container detached, so docker does not print the application's logs. If you want to see the logs, you can run:

sudo docker-compose logs -f -t

to check whether there are any weird behaviors or errors.
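
You can also do a quick local check that Librarian answers on the host port we mapped in docker-compose (4403 here) before wiring up nginx:

curl -I http://127.0.0.1:4403/

If the container is healthy, you should get an HTTP status line back.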

Nginx

Now, after we make sure it's running well, we need to serve it over the internet (via a reverse proxy). So without much talk, here is our server block for Librarian:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:4403/;
       }
}

* server_name: Change this to match the domain name of Librarian
* include: our reverse proxy file
* proxy_pass: the IP and port of our running docker image

After this, Librarian should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://
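
For example (with a hypothetical domain; use your own):

sudo certbot --nginx -d librarian.example.com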

Update it

Of course, after some time the image will become outdated and you will need to update it. What I love about docker is that updating is easy; to do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does: 1) stops the container, 2) pulls the latest image (downloads the update) and 3) starts the container back up!

Firewall

If you use a firewall (ufw for example), you really do not need any ports open other than 443 and 80, as we use nginx as a reverse proxy.
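
With ufw, that looks like:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp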

#howto #selfhost #docker

If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz

As a selfhoster, I want to strike a balance between the privacy of my users and the protection of my VPS. I can't use Cloudflare, as it invades people's privacy, but at the same time I want to protect my server from bad actors. So today you will learn how to protect your VPS from bad actors without invading your users' privacy.

To be honest, most VPS providers nowadays offer free DDoS protection out of the box, so this article mainly applies if you asked your provider and they said they do not support it.

But there are some drawbacks:

1. It's not that good; in fact, it just throttles the connections of peers connected to your website.
2. It throttles all websites by default, and if you want to allow a bigger quota for one website you need to figure it out on a site-by-site basis, as there is no template that fits all websites. For example, in my case it was Invidious: I wanted to allow more connections so videos would not hang while users watch (see the per-site sketch after the config below).
3. Again, it's not perfect, so if someone with multiple PCs tries to bring your site down, nginx will not help you :)

The good side is that it's simple and does not invade users' privacy. So it depends on your use case; for me, I will use this method until CrowdSec (an open source Cloudflare-like application) implements proper nginx support.

So without much talk: I will of course assume you have already installed nginx and know how to deal with it. Here is our nginx.conf (located at /etc/nginx/nginx.conf):

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

# DoS
#worker_processes  4;
worker_priority -5;
timer_resolution 100ms;
worker_rlimit_nofile 100000;


events {
    #worker_connections 768;
    #multi_accept on;
    worker_connections  1024;
    use epoll;
    # Accept as many connections as possible, after nginx gets notification about a new connection.
    multi_accept on;
}

http {
        server_names_hash_bucket_size  128;

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        types_hash_max_size 2048;
        server_tokens off;

        # server_names_hash_bucket_size 64;
        server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        # gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        # include /etc/nginx/dos.conf;
        include /etc/nginx/sites-enabled/*;

        ##
        # DoS
        ##
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log  main buffer=16k;
    access_log off;
    # Timeouts, do not keep connections open longer then necessary to reduce
    # resource usage and deny Slowloris type attacks.

    # reset timed out connections freeing ram
    reset_timedout_connection on;
    # maximum time between packets the client can pause when sending nginx any data
    client_body_timeout 10s;
    # maximum time the client has to send the entire header to nginx
    client_header_timeout 10s;
    # timeout which a single keep-alive client connection will stay open
    keepalive_timeout 65s;
    # maximum time between packets nginx is allowed to pause when sending the client data
    send_timeout 10s;

    # number of requests per connection, does not affect SPDY
    keepalive_requests 100; 
  
    # buffers

    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 16k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;

    proxy_buffer_size   128k; 
    proxy_buffers   4 256k;
    proxy_busy_buffers_size   256k;

    fastcgi_read_timeout 150;

    tcp_nodelay on;

    #postpone_output 0;

    gzip on;
    gzip_vary on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/json text/xml application/xml;
    gzip_disable "msie6";

    client_max_body_size 20m;

    # fastcgi cache, caching request without session variable initialized by session_start()
    fastcgi_cache_path /var/cache/nginx/fastcgi_cache levels=1:2 keys_zone=fastcgi_cache:16m max_size=256m inactive=1d;
    fastcgi_temp_path /var/cache/nginx/fastcgi_temp 1 2;

    # DDoS Mitigation 
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_conn perip 100;

    limit_req_zone $binary_remote_addr zone=engine:10m rate=2r/s;
    limit_req_zone $binary_remote_addr zone=static:10m rate=100r/s;


    client_body_buffer_size 200K;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 8k;
}

Feel free to adjust the settings to your needs, and as I said, if you want to allow a certain website more connections or a bigger upload size (Nextcloud, for example), you need to do it site by site by editing its config.
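
As a sketch of such a per-site override (hypothetical domain and upstream port), here is how you could apply the perip and engine zones defined above inside one site's server block:

server {
    server_name example.com;

    # cap this site at 20 concurrent connections per client IP,
    # overriding the global limit_conn set in nginx.conf
    limit_conn perip 20;

    location / {
        # rate-limit requests using the engine zone (2r/s),
        # absorbing short bursts of up to 5 requests
        limit_req zone=engine burst=5 nodelay;
        include /etc/nginx/reverse-proxy.conf;
        proxy_pass http://127.0.0.1:3000/;
    }
}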

For more info you can get back to: https://gist.github.com/igortik/0130e69a163d14658ef3d013890c8395 and https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/

#howto #selfhost #nginx

If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz

VPS stands for Virtual Private Server. It's a computer or server you rent from a big company like DigitalOcean for a monthly (or yearly, or even hourly) fee, so you can host your own website or self-host some open source projects, like Invidious for example. You can't fully trust them, as in the end it's someone else's computer; if you really want the best privacy, you might look into self-hosting at home.

Privacy and security depend on the company, and your use case narrows the choice too. There are some well-known players like DigitalOcean, Vultr and Hetzner (I use the latter). Again, it depends on your use case and the money you have. Most big companies will not risk their reputation to snoop in your little VPS, so do not worry much about that. You can set up an encrypted partition with LUKS, but again it's not bulletproof, so you have to trust the company you rent from.

So to choose a VPS, first pick a budget and define your use case. For example, some providers do not allow hosting Tor exit nodes, or do not open port 25 (the SMTP port for email) by default, to prevent spam. So it's really about your use case. Make a list, then go on reddit's r/selfhosted and say "I want a VPS that allows me to do this and that", and I'm sure someone will help you :) Or search online and try your luck.

As for my recommendation: right now I use both Hetzner and Kimsufi, and both are great. Also, a tip about Hetzner: there is a coupon code valid for 3 months, so do not forget to use it :) (you might have to contact their support to get it, as they love to hide it).

#howto #selfhost

If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz

To start online you need a domain name (esmailelbob.xyz, for example), and there are many, many websites that give you the ability to buy domain names. There are famous ones like GoDaddy and Namecheap, and there are others called offshore domain resellers (more on that later).

Which company you buy your domain from depends on your goal. Most people say GoDaddy is bad, so avoid it by all means; also, the recent Epik hack showed how bad they were about security, so avoid them too! You can give Namecheap a try.

But watch out: most domain registrars force you to enter personal data such as your address and real full name, and WHOIS lookup websites can show this info easily. You can lie about the data you enter, but if the company finds out, they will close your account and take your domain name. Here is where something called an offshore domain registrar comes into play. They buy the domain for you, but under their name, so if someone does a WHOIS lookup they will find the data of the company you bought from. In my case I use Njal.la, and so far my experience with them is okay. Maybe the only downside is that their prices are a little high (for me around 200 EGP per year), but you know what, it's a small price to pay for privacy.
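
You can see for yourself what a registrar publishes: the whois command-line tool prints the public record for any domain, for example:

whois esmailelbob.xyz

With an offshore registrar like Njal.la, the contact fields in that output show the company's details instead of yours.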

Conclusion

If you want to hide your real info online, you might give offshore domain registrars a try. If it's okay for you to show your data (or to pay extra for WHOIS guard from the company you buy the domain from), then you might give Namecheap or something similar a try!

#howto #selfhost

If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz

Mailcow is a docker container that has all the tools you need to run your own email server, just like Gmail but self-hosted, and it's actually easy to manage ;)

It's easy to host, but before I start I will assume that:

* You are running Debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your Linux distro is up to date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certificates: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get
* Your VPS has the SMTP port unlocked

Before initial setup

We need to make sure our VPS provider has opened the SMTP port for us, so do a simple test using telnet:

sudo apt install telnet

to install telnet first! Then test whether the outgoing SMTP port is enabled:

telnet smtp.gmail.com 25

This connects to Google's Gmail server; if we get a timeout error, it means our VPS does not have port 25 open, so you can ask your provider to open it. It depends on the provider: for example, Vultr opens it after you verify yourself, and Hetzner opens it after 1 month of using the service.
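
If telnet is not available (or just hangs), netcat can run the same check, assuming the nc package is installed:

nc -vz smtp.gmail.com 25

The -z flag only tests whether the port is reachable, so nc reports success or a timeout without starting an SMTP session.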

We also need to set up "reverse DNS" on our VPS so that when websites like Gmail check whether our domain is spam, we pass the test. This differs from one provider to another, so please get back to your provider.
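
Once your provider has set the reverse DNS, you can verify it with dig (hypothetical IP below); it should print your mail domain back:

dig -x 203.0.113.10 +short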

System Requirements

RAM: 800 MB – 1 GB
CPU: 1 GHz
Disk: 5 GB
System type: x86 or x86_64

Changes in DNS (domain side)

We need to add 2 DNS entries:

1. An A entry for mail.[domain name], with our VPS IP as its target
2. An MX entry on the root domain, with the mailcow domain (ex: mail.[domain name]) as its target

There are also optional DNS entries; you can find them when you visit your mailcow domain, log in as the admin, click Configuration, select Mail Setup, and from there click the "DNS" button (the URL should look like [mailcow domain]/mailbox). There you will see a list of DNS entries, so add them one by one in your domain.
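
In zone-file notation, the two required entries look roughly like this (hypothetical domain and IP; your DNS panel will have equivalent fields):

mail.example.com.   IN  A   203.0.113.10
example.com.        IN  MX  10  mail.example.com.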

Initial setup

We need to clone the mailcow git repo and generate a config, so:

git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized

And generate the config using your FQDN (fully qualified domain name, e.g. mail.esmailelbob.xyz):

./generate_config.sh

Change config if you need to:

nano mailcow.conf

Make sure to change the ports if you have other websites/applications running on the same ports (for example, bind HTTPS to 127.0.0.1 on port 8443 and HTTP to 127.0.0.1 on port 8080).
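
For example, the relevant lines in mailcow.conf would then look like this (a sketch; double-check the variable names in your generated file):

HTTP_PORT=8080
HTTP_BIND=127.0.0.1
HTTPS_PORT=8443
HTTPS_BIND=127.0.0.1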

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container detached, so docker does not print the application's logs. If you want to see the logs, you can run:

sudo docker-compose logs -f -t

to check whether there are any weird behaviors or errors.

Nginx

Now, after we make sure it's running well, we need to serve it over the internet (via a reverse proxy). So without much talk, here is our server block for mailcow:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

        location / {
                proxy_pass http://127.0.0.1:8080/;

                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                client_max_body_size 0;

        }


}

* server_name: Change this to match the domain name of mailcow
* proxy_pass: the IP and port of our running docker image

After this, mailcow should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://

Update it

Of course, after some time the image will become outdated and you will need to update it. What I love about docker is that updating is easy; to do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does: 1) stops the container, 2) pulls the latest image (downloads the update) and 3) starts the container back up!

Firewall

If you use a firewall (ufw for example), you need to open these TCP ports: 25 (SMTP), 465 (SMTPS), 587 (submission), 143 (IMAP), 993 (IMAPS), 110 (POP3), 995 (POP3S), 4190 (Sieve), 80 (HTTP) and 443 (HTTPS).
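
With ufw, you can open them all in one go, for example:

for port in 25 80 110 143 443 465 587 993 995 4190; do
  sudo ufw allow ${port}/tcp
done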

#howto #selfhost #docker

If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz

Mastodon is free and open source social media software, an alternative to Twitter.

It's easy to host, but before I start I will assume that:

* You are running Debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your Linux distro is up to date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certificates: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com with the root domain domain.com as its target, or an A entry for subdomain.domain.com with the IP of your VPS as its target.

mastodon docker-compose file

We need a docker-compose.yml file so we can start Mastodon (the .env.production file it references is prepared in the next section). I use this file:

version: '3'
services:

  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
    volumes:
      - ./postgres14:/var/lib/postgresql/data
    environment:
      - "POSTGRES_HOST_AUTH_METHOD=trust"

  redis:
    restart: always
    image: redis:6-alpine
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
    volumes:
      - ./redis:/data

#  es:
#    restart: always
#    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
#    environment:
#      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
#      - "cluster.name=es-mastodon"
#      - "discovery.type=single-node"
#      - "bootstrap.memory_lock=true"
#    networks:
#      - internal_network
#    healthcheck:
#      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
#    volumes:
#      - ./elasticsearch:/usr/share/elasticsearch/data
#    ulimits:
#      memlock:
#        soft: -1
#        hard: -1

  web:
#    build: .
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off 127.0.0.1:3000/health || exit 1"]
    ports:
      - "127.0.0.1:3000:3000"
    depends_on:
      - db
      - redis
#      - es
    volumes:
      - ./MASTODON_DATA:/mastodon/public/system
      - ./MASTODON_DATA:/mastodon/public/assets
      - ./Mastomoji.tar.gz:/opt/mastodon/Mastomoji.tar.gz
  streaming:
#    build: .
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: node ./streaming
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off 127.0.0.1:4000/api/v1/streaming/health || exit 1"]
    ports:
      - "127.0.0.1:4000:4000"
    depends_on:
      - db
      - redis

  sidekiq:
#    build: .
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    networks:
      - external_network
      - internal_network
    volumes:
      - ./MASTODON_DATA:/mastodon/public/system
## Uncomment to enable federation with tor instances along with adding the following ENV variables
## http_proxy=http://privoxy:8118
## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true
#  tor:
#    image: sirboops/tor
#    networks:
#      - external_network
#      - internal_network
#
#  privoxy:
#    image: sirboops/privoxy
#    volumes:
#      - ./priv-config:/opt/config
#    networks:
#      - external_network
#      - internal_network

networks:
  external_network:
  internal_network:
    internal: true

Optional stuff to change:

* ./MASTODON_DATA: This is where our posts and photos will be saved

NOTE: Do not change Mastodon's ports. I tried to change them and it did not work, so if you have another website running on the same port, change the port of the other website, not Mastodon's.

Prepare Mastodon

Now we need to create .env.production so we can edit it and configure Mastodon:

cp .env.production.sample .env.production

Now let's pull the images from Docker Hub:

docker-compose pull

And fix the permissions of Mastodon's data folder so the container user (UID 991) can write to it:

sudo chown -R 991:991 ./MASTODON_DATA

Spin it up!

Now that the docker-compose file is done and prepared, we need to set up Mastodon itself: create the database, generate the secret keys and so on:

docker-compose run --rm web bundle exec rake mastodon:setup

Answer the questions, and once you are done it will print Mastodon's configuration in the terminal; copy it and paste it into the .env.production file.

NOTE: While you interact with the wizard, you might want to keep the database host, port, username and password at their defaults (and the Redis settings too), since they match the service names in our docker-compose file.
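
For reference, a sketch of the relevant .env.production lines that match this docker-compose file (keep whatever the wizard generated for you):

DB_HOST=db
DB_PORT=5432
DB_USER=postgres
DB_PASS=
REDIS_HOST=redis
REDIS_PORT=6379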

Now it's time to run Mastodon! So:

docker-compose up -d

The -d option runs the containers detached, so docker does not print the application's logs. If you want to see the logs, you can run:

sudo docker-compose logs -f -t

to check whether there are any weird behaviors or errors.

Nginx

Now to tricky part number 2! After we make sure it's running well, we need to serve it over the internet (via a reverse proxy). First create a dummy nginx website with the same domain name you will use for Mastodon, then generate the HTTPS certificate using certbot. After that is all set up, you can paste this into your Mastodon nginx server block:

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}
server {
  server_name [mastodon domain name] ;

#  ssl_protocols TLSv1.2;
#  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
#  ssl_prefer_server_ciphers on;

  #ssl_session_cache shared:SSL:10m;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 80m;

  root /home/mastodon/live/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
  }
  
  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://127.0.0.1:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;


    add_header Onion-Location http://social.lqs5fjmajyp7rvp4qvyubwofzi6d4imua7vs237rkc4m5qogitqwrgyd.onion$request_uri;
  #root /home/mastodon/live/public;
  # Useful for Let's Encrypt
  #location /.well-known/acme-challenge/ { allow all; }
  #location / { return 301 https://$host$request_uri; }



    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/[mastodon domain name]/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/[mastodon domain name]/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
        listen 127.0.0.1:80 ;
        server_name social.lqs5fjmajyp7rvp4qvyubwofzi6d4imua7vs237rkc4m5qogitqwrgyd.onion ;
#  ssl_protocols TLSv1.2;
#  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
#  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 80m;

  root /home/mastodon/live/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
  }
  
  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://127.0.0.1:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;
}


server {
    if ($host = [mastodon domain name]) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


  server_name [mastodon domain name] ;

    listen [::]:80;
    listen 80;
    return 404; # managed by Certbot


}

[mastodon domain name]: Replace this with your mastodon's domain name

Update it

Of course, after some time the image will become outdated and you will need to update it. What I love about docker is that updating is easy; to do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does: 1) stops the containers, 2) pulls the latest images (downloads the update) and 3) starts the containers back up!

And if you get a notification about migrating your database, simply run:

docker-compose run --rm web bundle exec rake db:migrate

Firewall

If you use a firewall (ufw for example), you really do not need any ports open other than 443 and 80, as we use nginx as a reverse proxy.

To add an onion link, so you can offer your instance on both Tor and the clearnet, simply open the .env.production file and add this line:

ALTERNATE_DOMAINS=[YOUR ONION DOMAIN HERE OR REALLY ANY OTHER DOMAIN]

Save the file, restart the containers (docker-compose down && docker-compose up -d) and it should work.

#howto #selfhost #docker

If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz

Unlike Lingva, which is a proxy for Google Translate, LibreTranslate uses Argos Translate, an actual translation engine. It has its own engine, its own AI and an API for translations, so whether you are an application owner who wants a free API or a normal user who wants to escape the Google Translate bubble, you came to the right application :)

It's easy to host, but before I start I will assume that:

* You are running Debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your Linux distro is up to date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certificates: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

System Requirements

RAM: 4GB

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com with the root domain domain.com as its target, or an A entry for subdomain.domain.com with the IP of your VPS as its target.

Libre translate docker-compose file

We need a docker-compose.yml file so we can start LibreTranslate. I use this file:

version: "3"

services:
  libretranslate:
    container_name: libretranslate
    #build: .
    image: libretranslate/libretranslate
    restart: unless-stopped
    ports:
      - 127.0.0.1:5550:5000
    ## Uncomment above command and define your args if necessary
    command: --frontend-language-source en --frontend-language-target ar

Optional stuff to change:

* 127.0.0.1:5550: The IP and port number for LibreTranslate (so we can use it later for nginx to reverse proxy it)
* command: the configuration for LibreTranslate; to learn more about the args you can add, please visit: https://github.com/LibreTranslate/LibreTranslate#arguments
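
For example, you could extend the command with a couple more flags such as a character limit and a per-client request limit (flag names as I recall them; verify against the linked arguments list):

command: --frontend-language-source en --frontend-language-target ar --char-limit 5000 --req-limit 30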

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container detached, so docker does not print the application's logs. If you want to see the logs, you can run:

sudo docker-compose logs -f -t

to check whether there are any weird behaviors or errors.

Nginx

Now, after we make sure it's running well, we need to serve it over the internet (via a reverse proxy). So without much talk, here is our server block for LibreTranslate:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:5550/;
       }
}

* server_name: Change this to match the domain name of LibreTranslate
* include: our reverse proxy file
* proxy_pass: the IP and port of our running docker image

After this, LibreTranslate should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://

Update it

Of course, after some time the image will become outdated and you will need to update it. What I love about docker is that updating is easy; to do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does: 1) stops the container, 2) pulls the latest image (downloads the update) and 3) starts the container back up!

Firewall

If you use a firewall (ufw for example), you really do not need any ports open other than 443 and 80, as we use nginx as a reverse proxy.

#howto #selfhost #docker

If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz

Nitter lets you browse Twitter with privacy in mind, and with actually more features, like RSS feeds. It's a proxy for Twitter, so you can follow the people you like on Twitter without actually giving up your data.

It's easy to host, but before I start I will assume that:

* You are running Debian 11
* You already have a VPS: https://blog.esmailelbob.xyz/how-to-get-a-vps
* Your Linux distro is up to date (sudo apt update && sudo apt upgrade)
* You have a domain name: https://blog.esmailelbob.xyz/how-to-get-a-domain-name
* You have sudo access or a root account
* You already installed docker and docker-compose: https://blog.esmailelbob.xyz/how-to-install-docker-and-docker-compose
* You already installed Nginx: https://blog.esmailelbob.xyz/how-to-install-and-configure-nginx-96yp
* You already have a reverse proxy conf file: https://blog.esmailelbob.xyz/how-to-use-reverse-proxy-with-nginx
* You already have certbot to issue certificates: https://blog.esmailelbob.xyz/how-to-use-certbot-with-nginx-to-make-your-website-get

Changes in DNS (domain side)

You really do not need to add any DNS entries, unless you want to create a subdomain for this container. In that case, go to your domain's DNS panel and add either a CNAME entry for subdomain.domain.com with the root domain domain.com as its target, or an A entry for subdomain.domain.com with the IP of your VPS as its target.

nitter docker-compose file

We need a docker-compose.yml file so we can start Nitter. I use this file:

version: "3"

services:

  nitter:
    image: zedeus/nitter:latest
    container_name: nitter
    ports:
      - "127.0.0.1:4040:8080" # Replace with "8080:8080" if you don't use a reverse proxy
    volumes:
      - ./nitter.conf:/src/nitter.conf:ro
    depends_on:
      - nitter-redis
    restart: unless-stopped

  nitter-redis:
    image: redis:6-alpine
    container_name: nitter-redis
    command: redis-server --save 60 1 --loglevel warning
    volumes:
      - nitter-redis:/data
    restart: unless-stopped

volumes:
  nitter-redis:

Optional stuff to change:

* 127.0.0.1:4040: The IP and port number for Nitter (so we can use it later for nginx to reverse proxy it)
* ./nitter.conf: Where our config for Nitter will be saved, and under what name

If you want to see a real-world example of a nitter.conf file:

[Server]
address = "0.0.0.0"
port = 8080
https = false  # disable to enable cookies when not using https
httpMaxConnections = 100
staticDir = "./public"
title = "nitter"
hostname = "nitter.esmailelbob.xyz"

[Cache]
listMinutes = 240  # how long to cache list info (not the tweets, so keep it high)
rssMinutes = 10  # how long to cache rss queries
redisHost = "nitter-redis" # Change to "nitter-redis" if using docker-compose
redisPort = 6379
redisPassword = ""
redisConnections = 20  # connection pool size
redisMaxConnections = 30
# max, new connections are opened when none are available, but if the pool size
# goes above this, they're closed when released. don't worry about this unless
# you receive tons of requests per second

[Config]
hmacKey = "13441753" # random key for cryptographic signing of video urls
base64Media = true # use base64 encoding for proxied media urls
enableRSS = true  # set this to false to disable RSS feeds
enableDebug = false  # enable request logs and debug endpoints
proxy = ""  # http/https url, SOCKS proxies are not supported
proxyAuth = ""
tokenCount = 10
# minimum amount of usable tokens. tokens are used to authorize API requests,
# but they expire after ~1 hour, and have a limit of 187 requests.
# the limit gets reset every 15 minutes, and the pool is filled up so there's
# always at least $tokenCount usable tokens. again, only increase this if
# you receive major bursts all the time

# Change default preferences here, see src/prefs_impl.nim for a complete list
[Preferences]
theme = "Nitter"
replaceTwitter = "nitter.esmailelbob.xyz"
replaceYouTube = "invidious.esmailelbob.xyz"
replaceReddit = "libreddit.esmailelbob.xyz"
replaceInstagram = "bibliogram.esmailelbob.xyz"
proxyVideos = false
hlsPlayback = true
infiniteScroll = true

NOTE: You do not need to change the port in Nitter's config. Docker and the image inside it treat the two as separate things: port 8080 in nitter.conf is the port inside the container, and docker maps it to the host address we set in the docker-compose file (127.0.0.1:4040), which is what nginx talks to.
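
Once the container is up, you can confirm the mapping from the host side:

docker port nitter

It should report the container's 8080/tcp mapped to 127.0.0.1:4040.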

Spin it up!

Now that we are done editing and everything is cool, we need to run our container, so just run:

docker-compose up -d

The -d option runs the container detached, so docker does not print the application's logs. If you want to see the logs, you can run:

sudo docker-compose logs -f -t

to check whether there are any weird behaviors or errors.

Nginx

Now, after we make sure it's running well, we need to serve it over the internet (via a reverse proxy). So without much talk, here is our server block for Nitter:

server {
        listen [::]:80;
        listen 80;
       
        server_name [domain name] ;

       location / {
               include /etc/nginx/reverse-proxy.conf;
               proxy_pass http://127.0.0.1:4040/;
       }
}

* server_name: Change this to match the domain name of Nitter
* include: our reverse proxy file
* proxy_pass: the IP and port of our running docker image

After this, Nitter should be up and running! :) Just do not forget to run certbot --nginx to secure it with https://

Update it

Of course, after some time the image will become outdated and you will need to update it. What I love about docker is that updating is easy; to do it, just run:

docker-compose down && docker-compose pull && docker-compose up -d

What it does: 1) stops the container, 2) pulls the latest image (downloads the update) and 3) starts the container back up!

Firewall

If you use a firewall (ufw for example), you really do not need any ports open other than 443 and 80, as we use nginx as a reverse proxy.

#howto #selfhost #docker

If you find this post useful, help me keep running this blog and send me a tip: https://donate.esmailelbob.xyz