Feedback Docker Deployment

Hi Folks,

First off: thanks for creating and, more importantly, maintaining such a great tool for the community.

I just tried to deploy a testing instance for a group in our organization (dzd-ev.de).
Modern production deployment happens via containers most of the time, therefore it is really great that you provide the community with a Docker container.
With some experience in software development and IT operations under my belt, I will be so cocky as to just leave you some feedback here (without knowing anything about your processes and reasoning; maybe it helps, otherwise just delete it).

I realized that you tried to squeeze everything you need into one container. At first sight that is a great idea: anyone who wants to deploy your application just needs to start this one container.

At second sight, that approach comes with a lot of pitfalls, violations of best practices and false incentives.

A multi-container approach with a docker-compose file that transparently links all your modules is much more flexible and maintainable, and only a tiny bit more complex to set up.
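
To sketch what I mean (purely illustrative from my side; the service names, image tags, credentials and variable names below are placeholders I made up, not anything openBIS ships):

# docker-compose.yml -- illustrative sketch only; images, credentials and env variables are placeholders
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: openbis
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: openbis
    volumes:
      - pgdata:/var/lib/postgresql/data   # DB state lives in a plainly visible volume
  app:
    image: example/openbis-app             # hypothetical application image
    environment:
      DB_HOST: db                           # placeholder: however the app expects its DB host to be passed
    depends_on:
      - db
  proxy:
    image: nginx:stable                     # reverse proxy in front of the application
    ports:
      - "443:443"
    depends_on:
      - app
volumes:
  pgdata:

With a layout like this, backup, monitoring and swapping out single pieces (e.g. the reverse proxy) stay the operator's business instead of being baked into one image.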

The first pitfall for me was: how do I back up and restore the Postgres DB? It is very hard to find out what the user/password and database name are. It almost feels as if you had tried to hide the DB from the operator :slight_smile:
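
For comparison, with a dedicated Postgres container a backup or restore would be a one-liner (the container, user and database names below are placeholders):

# dump the database from a dedicated Postgres container (all names are placeholders)
docker exec -t openbis_db pg_dump -U openbis openbis > openbis_backup.sql

# restore the dump into a (fresh) Postgres container
docker exec -i openbis_db psql -U openbis openbis < openbis_backup.sql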

There are a bunch of articles out there on why you should have one container per process. At the very least, having the database in a separate container is a basic best practice.

Let's take a real-life example of what I mean by false incentives: in our case we had a lab person with some basic IT knowledge. It was possible for that person to spin up an openBIS instance. Was it possible to create an automated backup of the state of the application and restore the data if needed? No. A Postgres data volume is not a restorable backup.

Will the database break some day, with nobody knowing there was a Postgres running inside the container, so data is lost and tears are shed? Possible. Who will be blamed? Not the failing Postgres DB, but openBIS as a software product.

My approach is: do not hide complexity to lure spare-time admins. Hiding it will help spare-time admins in the short term but possibly hurt them much more in the long term. On the other hand, it makes professional production deployments just harder. I want to separate reverse proxy, database and web workers; at the moment that is only possible if I build my own containers.

In short: it would be great to have a container per process/module and a good docker-compose file to wrap it all up, instead of the single do-it-all container. I think this would also streamline your CI/CD repo very much. It looks complex a.f. at first sight :smiley:

If you want to talk about CI/CD or need some support, just give me a ping at bleimehl@dzd-ev.de.

Thanks for reading and have a great one
Tim

Also, changing the default admin password is slightly awkward :smiley: That should be easier, or at least better documented on Docker Hub:

docker exec -it openbis_container_name_or_id /home/openbis/openbis/servers/openBIS-server/jetty/bin/passwd.sh change -P admin

Also, contrary to what is documented on Docker Hub:

openbis/webapp/eln-lims/ and openbis/webapp/openbis-ng-ui/ do not work out of the box for me; only the core UI works. If these were separate containers it would be much easier to debug them and read their logs.

Dear Tim,

Feedback is welcome, but as you can imagine, most of what you mention is something we have discussed internally since the first version of this Docker image.

Long story short:

In the current version, the Postgres process runs in the same container by default. To run it in a separate container, just set the environment variable FORCE_OPENBIS_POSTGRES_HOST; that host should have the Postgres port listening and reachable. We do this internally, and all openBIS testing uses this feature.
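
As a rough illustration (the image name and hostname below are placeholders, only the environment variable is the one mentioned above):

# sketch: run openBIS against an external Postgres (image name and host are placeholders)
docker run -d --name openbis \
  -e FORCE_OPENBIS_POSTGRES_HOST=my-postgres-host \
  the-openbis-image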

For the next major openBIS version we will provide a new Docker image, and that new image will require orchestration by default.

Regarding your other struggles, I honestly don't know how you are setting it up. The Docker container uses our installer, as shown in the source, and we run Docker internally all the time using the defaults, with the Core, ELN and Admin UIs all available.

I suggest you send a copy of your startup scripts to our helpdesk, indicating the problems you have.

Best,
Juan

Thanks, Tim, for your comments. I share most of your concerns and have sometimes wished for similar things:

  • better documentation of the existing Docker images
  • separate images for DB, AS and DSS (ideally with support for other databases)
  • true continuous integration, i.e. the latest build from the openBIS CI always published on Docker Hub for those of us willing to test new features before they are released (and before our production servers are upgraded)

As for your problems, I think these applications should be enabled through the CORE_PLUGINS env variable, e.g.:

CORE_PLUGINS='enabled-modules = dropbox-monitor, dataset-uploader, dataset-file-search, xls-import, openbis-sync, eln-lims, openbis-ng-ui, search-store'
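
(A sketch of how that could be passed when starting the container; the image name is a placeholder:)

# sketch: pass CORE_PLUGINS at container start (image name is a placeholder)
docker run -d --name openbis \
  -e CORE_PLUGINS='enabled-modules = dropbox-monitor, dataset-uploader, dataset-file-search, xls-import, openbis-sync, eln-lims, openbis-ng-ui, search-store' \
  the-openbis-image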

Best,
Simone