Soooo…
file_get_contents(https://invoices."REDUCTED".com/storage/3nRvxjdaX06fqFrHkPggMHo7cRViWHss/documents/YPogE8ICsdUmlDiRgCBioxGAYya3b7kYbSchwAnr.png): Failed to open stream: Operation timed out
Any clue as to where I should dig?
There is something fishy with the Invoice Ninja internals that triggers the firewall.
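One way to check that theory, assuming the app runs in a compose service called `app` and that curl is available in the image (both are assumptions, adjust to your setup and replace the placeholder hostname with your real one):

```bash
# Can the container reach its own public hostname at all?
docker-compose exec app curl -sv --max-time 10 \
  https://invoices.example.com/ -o /dev/null

# Compare with the same request from the host itself; if the host succeeds
# but the container times out, look at firewall / hairpin-NAT rules rather
# than at Invoice Ninja.
curl -sv --max-time 10 https://invoices.example.com/ -o /dev/null
```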
Why is this destructive?
@Rusticus,
I didn’t verify this on my setup, hence I said “potentially”. But with the Docker documentation in mind, rather than `docker-compose down` I would do:
`docker-compose stop <service name>`
`docker-compose rm <service name>`
`docker-compose pull <service name>`
`docker-compose up -d` to spin it back up (along with the rest of the changes, if any, to other containers)
Sorry, typing from the phone, so typos and such.
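For reference, a sketch of that sequence, assuming the Invoice Ninja service is called `app` in the compose file (placeholder name, substitute your own):

```bash
# Update a single service without touching the rest of the stack.
docker-compose stop app     # stop only this service's container
docker-compose rm -f app    # remove the stopped container (volumes stay unless -v is passed)
docker-compose pull app     # fetch the newer image for this service
docker-compose up -d        # recreate whatever changed, detached
```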
Actually, according to the documentation:
Stops containers and removes containers, networks, volumes, and images created by `up`.
By default, the only things removed are:
* Containers for services defined in the Compose file
* Networks defined in the networks section of the Compose file
* The default network, if one is used
Networks and volumes defined as external are never removed.
Anonymous volumes are not removed by default. However, as they don’t have a stable name, they will not be automatically mounted by a subsequent `up`. For data that needs to persist between updates, use explicit paths as bind mounts or named volumes.
So yes, if you really use anonymous volumes, you are right, it’s dangerous; on the other hand, anonymous volumes make the data hard to reuse in many situations anyway.
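If you want to double-check whether any of your data actually sits on an anonymous volume before running `down`, something like this should show it (the container name is a placeholder):

```bash
# List every mount of a container: named volumes show a readable name,
# anonymous volumes show a 64-character hash, bind mounts show no name.
docker inspect -f \
  '{{ range .Mounts }}{{ .Type }}  {{ .Name }}  ->  {{ .Destination }}{{ "\n" }}{{ end }}' \
  invoiceninja_app_1

# Named volumes declared in the compose file, for comparison.
docker-compose config --volumes
```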
Thanks for digging in. The fact that Invoice Ninja recommends you do something without considering your environment is scary.
In their defense, if you really use anonymous volumes, it’s your own fault/problem.
I don’t see any problem with a proper setup
It’s not only about volumes, though. There are other containers in the compose file. They might not be configured to expect this, or downtime might not be acceptable for them, and so on.
I get the idea that open source = you are on your own most of the time, and you’d better be an experienced dude or dudette - but it doesn’t diminish the fact that a single Docker container is recommending something that is going to affect all the other containers. I’ve never seen this in the documentation for other containers.
It’s like the opposite of what containerization is trying to achieve =)
Guess I’m starting to sound like a drama queen. Take the mic away, please, lol.