V5 Docker: Bug on new user creation: Failed to load resources and incorrect redirect


When creating a new user, the page to confirm the email address and create a new password does not render correctly:

I assume this is caused by resources not loading correctly:

Another issue is that when clicking Submit, the POST goes to the internal IP address and not the public IP.
E.g. in my scenario the public URL is https://ninja.xxx.xx and internally it is running at http://192.168.xx.xx:yy. The POST goes to http://192.168.xx.xx:yy when it should actually go to https://ninja.xxx.xx.

Even from within my network this means the password is not updated, and hence the newly created user cannot log in.

In the env file I have https://ninja.xxx.xx as APP_URL and also REQUIRE_HTTPS=TRUE.

Do I need to do some more config to fix this?
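For reference, the settings mentioned above correspond to these entries in the env file (the hostname is the placeholder used in this thread):

```ini
APP_URL=https://ninja.xxx.xx
REQUIRE_HTTPS=TRUE
```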

My environment:

  • Docker
  • IN v5: v5.2.15-C56


Maybe this will help:


Thanks @hillel for the input.
I do not think this is a proxy issue though as everything else is working fine.

In the page source there are script references to the IP, or to http only:
<script src="http://ninja.xxx.xx//js/app.js?id=696e8203d5e8e7cf5ff5" defer></script>
<img src="http://192.168.xx.xx:yy/images/invoiceninja-black-logo-2.png" class="border-b border-gray-100 h-18 pb-4" alt="Invoice Ninja logo">

And the form action is actually set to the IP:
<form action="http://192.168.xx.xx:yy/user/confirm/-ohrnvlYKuAovaTjQNiXBoxlYPB2DSY8Cu46lKN51aXUqr0v5hdTf1b18rNfZNIdo" method="post" class="mt-6">

@david do you have any suggestions?

Are you using the docker-compose from our repo? or is this one you have rolled yourself?

The issue is how the hostname is being passed through from the front end to Laravel. You may need to add headers that include the hostname so that it resolves correctly; at the moment Laravel is just seeing the IP address.

I am using the docker-compose from the GitHub repo. Here is my docker-compose file:

version: '3.7'

services:
  server:
    image: nginx
    restart: always
    env_file: env
    volumes:
      # Vhost configuration
      #- ./config/caddy/Caddyfile:/etc/caddy/Caddyfile
      #- ./config/nginx/in-vhost.conf:/etc/nginx/conf.d/in-vhost.conf:ro
      - ./config/nginx/in-vhost.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/app/public:/var/www/app/public:ro
    depends_on:
      - app
    # Run webserver nginx on port 80
    # Feel free to modify depending what port is already occupied
    ports:
      - "81:80"
      #- "443:443"
    networks:
      - invoiceninja
    extra_hosts:
      - "ninja.xxx.xx:62.50.xxx.xxx" # host and ip <= actually this is the public IP behind the URL

  app:
    image: invoiceninja/invoiceninja:5
    env_file: env
    restart: always
    volumes:
      - ./config/hosts:/etc/hosts:ro
      - ./docker/app/public:/var/www/app/public:rw,delegated
      - ./docker/app/storage:/var/www/app/storage:rw,delegated
    depends_on:
      - db
    networks:
      - invoiceninja
    extra_hosts:
      - "ninja.xxx.xx:62.50.xxx.xxx" # host and ip <= actually this is the public IP behind the URL

  db:
    image: mysql:5
#    When running on ARM64 use MariaDB instead of MySQL
#    image: mariadb:10.4
#    For auto DB backups comment out image and use the build block below
#    build:
#      context: ./config/mysql
    ports:
      - "3305:3306"
    restart: always
    env_file: env
    volumes:
      - ./docker/mysql/data:/var/lib/mysql:rw,delegated
      # remove comments for next 4 lines if you want auto sql backups
      #- ./docker/mysql/bak:/backups:rw
      #- ./config/mysql/backup-script:/etc/cron.daily/daily:ro
      #- ./config/mysql/backup-script:/etc/cron.weekly/weekly:ro
      #- ./config/mysql/backup-script:/etc/cron.monthly/monthly:ro
    networks:
      - invoiceninja
    extra_hosts:
      - "ninja.xxx.xx:62.50.xxx.xxx" # host and ip <= actually this is the public IP behind the URL

networks:
  invoiceninja:


I have now changed the incoming IP to the local IP, but still the same result.
Also: everything else seems to work fine, hence I am not sure whether this is a config issue. It only appears on the email confirmation page.

Quick update: I could fix the CSS issue, as the APP_URL was not correctly set in the env file (missed an s in https://) :slight_smile:

The logo is still referring to the internal IP instead of https://ninja.xxx.xx, as is the form action.
Could this be an issue with the hosts file?

Right now it looks like this:
192.168.xx.xx ninja.xxx.xx

I would try to reupload the logo file as it is probably referencing a bad location.

Thanks @david
I am one step further now, as the page looks good. I have changed the hosts entry in the docker-compose to in5.test, which helped here.

Problem is still that in some places the internal IP is referenced.
This happens in the submit form behind the update button and also when loading min.pdf.js:

The weird thing is that the templating of the PDFs works, including the preview. Only in the invoice itself can I not see the generated PDF. When sending the invoice, the PDF looks fine. (FYI: if I go directly to the IP and run IN there, everything works.)

@hillel I will try the reverse proxy config you mentioned, sorry for not considering this earlier.
Since I am using Apache2 as a reverse proxy at the moment, I tried it with this config:
RequestHeader set Host $host
RequestHeader set X-Real-IP $remote_addr
RequestHeader set X-Forwarded-For $proxy_add_x_forwarded_for
RequestHeader set X-Forwarded-Proto $scheme

No luck so far though and I will keep trying.

Does anyone have a working solution for Apache2 as reverse proxy for an InvoiceNinja5 Docker instance?
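For anyone searching later: a minimal Apache2 reverse-proxy vhost along these lines should work, assuming mod_proxy, mod_proxy_http, mod_headers and mod_ssl are enabled. The hostname, backend address and certificate paths are placeholders taken from this thread; adjust them to your setup. The key directive is ProxyPreserveHost, which passes the original Host header through so Laravel no longer sees only the internal IP.

```apache
<VirtualHost *:443>
    ServerName ninja.xxx.xx

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/ninja.crt
    SSLCertificateKeyFile /etc/ssl/private/ninja.key

    # Forward the original Host header instead of rewriting it
    # to the backend address.
    ProxyPreserveHost On

    # Tell Laravel the original request was HTTPS; mod_proxy adds
    # X-Forwarded-For and X-Forwarded-Host automatically.
    RequestHeader set X-Forwarded-Proto "https"

    ProxyPass        / http://192.168.xx.xx:yy/
    ProxyPassReverse / http://192.168.xx.xx:yy/
</VirtualHost>
```

Combined with TRUSTED_PROXIES in the env file (mentioned below), this made the generated URLs use the public hostname.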

Have you added TRUSTED_PROXIES=* to the env file?

Thanks @david
This helped a lot. Somehow I thought it was not necessary after reading through the other thread. My bad.
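As background: Laravel only honors the X-Forwarded-Host / X-Forwarded-Proto headers from proxies it has been told to trust, which is what this env entry enables:

```ini
# Trust forwarded headers from any upstream proxy; if possible,
# narrow this to the proxy's actual IP address instead of *.
TRUSTED_PROXIES=*
```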

After this I still had the issue that some resources were requested with http instead of https.

I updated the nginx config to serve HTTPS. For this I created a new file in-vhost-ssl.conf with the following content:

server {
	listen 443 ssl;
	server_name localhost;
	ssl_certificate /etc/nginx/certs/nginx-selfsigned.crt;
	ssl_certificate_key /etc/nginx/certs/nginx-selfsigned.key;

	client_max_body_size 100M;

	root /var/www/app/public/;
	index index.php;

	location / {
		try_files $uri $uri/ /index.php?$query_string;
	}

	location = /favicon.ico { access_log off; log_not_found off; }
	location = /robots.txt  { access_log off; log_not_found off; }

	location ~ \.php$ {
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		fastcgi_pass app:9000;
		fastcgi_index index.php;
		include fastcgi_params;
		fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
		fastcgi_intercept_errors off;
		fastcgi_buffer_size 16k;
		fastcgi_buffers 4 16k;
	}
}
Also I uncommented port 443 in the docker-compose.yml and changed the volumes as follows to load the SSL vhost and mount the certs:

      - ./config/nginx/in-vhost-ssl.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/app/public:/var/www/app/public:ro
      - /etc/ssl/certs:/etc/nginx/certs

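If anyone needs it, the self-signed pair referenced in in-vhost-ssl.conf can be generated with openssl, e.g. like this. The output directory and CN are illustrative; the files then need to go into the host directory that is mounted at /etc/nginx/certs in the container.

```shell
# Create a self-signed certificate/key pair for the nginx container.
# The CN is the placeholder hostname from this thread.
mkdir -p ./certs
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout ./certs/nginx-selfsigned.key \
  -out ./certs/nginx-selfsigned.crt \
  -subj "/CN=ninja.xxx.xx"
```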
This can surely be optimized but it works now for me :slight_smile: Thanks again for your help and apologies for troubling you with this.