ERPNext Docker Image as an alternative to Official ERPNext Docker Repo

Was about to ask same :blush:

Did you guys mean update to kubernetes/helm solutions or Docker solutions? :slightly_smiling_face:

Hi @pipech

I was referring to Docker :blush:

For Docker, it gets updated constantly.

A new image gets built automatically (by Travis) every week. It performs a very basic test before pushing to Docker Hub (it just checks that the image can run and that accessing the website returns a 200 HTTP status code).

Currently there are 3 important branches on Frappe, so we now have 4 latest tags.

v10.x.x (v10) - v10 [python2]

  • v10-py2-latest

Master (mas) - v11 [python2, python3]

  • mas-py2-latest
  • mas-py3-latest

Develop (dev) - v12 [python3]

  • dev-py3-latest
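As a quick sketch of how these tags are used (assuming the `pipech/erpnext-docker-debian` repository name that appears later in this thread):

```shell
# Pull the rolling "latest" image for the master branch on Python 3
docker pull pipech/erpnext-docker-debian:mas-py3-latest

# Or pin to a specific versioned tag so later rebuilds don't change your image
docker pull pipech/erpnext-docker-debian:11.1.11-py3
```

Pinning to a versioned tag is the safer choice for production, since the `-latest` tags move as new versions are released.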

Image size is around 2.75 GB.

And there are 3 setups, as mentioned before: trial, develop and production.

Cheers!

Personally (I must admit a bit of bias, I’m working on bringing the official image up to par), I’m not sure what you mean. Part of containerizing is making every container do only what it needs to do, and not more than that.

Plus, the (real) reason for this, is probably because bench runs three instances of redis.

Hi @pipech

What I actually need is a bit of clarification about the tags. Is mas-py3-latest the same as 11.1.11-py3, and will they both work?

They both work. The latest tag will be the same image as the most recent versioned tag.

i.e. For now, the mas-py3-latest tag is the same image as 11.1.11-py3.
If a tag 11.1.12-py3 is added, mas-py3-latest will change and become the same image as 11.1.12-py3,
but the tag 11.1.11-py3 will stay the same.


Excellent! That was my thought too but wanted to be sure

Thanks for this great work :+1:t3:

Hi @pipech

I tried installing the latest version (master branch) last week and encountered the error below:

Traceback (most recent call last):
  File "/usr/local/bin/bench", line 11, in <module>
    load_entry_point('bench', 'console_scripts', 'bench')()
  File "/home/frappe/.bench/bench/cli.py", line 40, in cli
    bench_command()
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/home/frappe/.bench/bench/commands/update.py", line 128, in switch_to_branch
    switch_to_branch(branch=branch, apps=list(apps), upgrade=upgrade)
  File "/home/frappe/.bench/bench/app.py", line 402, in switch_to_branch
    switch_branch(branch, apps=apps, bench_path=bench_path, upgrade=upgrade)
  File "/home/frappe/.bench/bench/app.py", line 395, in switch_branch
    reload(utils)
UnboundLocalError: local variable 'reload' referenced before assignment

The site was, however, still installed successfully, but it was v10 (ERPNext: v10.1.73 / Frappe Framework: v10.1.65) instead of v11!

Any idea what could cause this?

Also, I noticed that commands like bench update --patch don’t work. How do we get around issues that may occur due to differences in database schema?

Thanks

Hi, @wale

What command did you run before this error occurred?


Could you copy the traceback of the bench update error and paste it here, or open an issue on GitHub?


I’ve just pulled and run pipech/erpnext-docker-debian:mas-py3-latest and it is ERPNext: v11.1.13 (master), Frappe Framework: v11.1.13 (master).

I think you might be using the wrong image tag. Which image tag exactly did you use?


You shouldn’t have to install ERPNext, since the image comes with Bench, Frappe and ERPNext pre-installed.

Best regards

Can I change the version to v7 and install a custom app?

@pipech

Have you worked out a way to set up Let’s Encrypt on the production setup so that it could truly be used in a production environment?

I have been looking through your files and do not see a separate container for this.
How are you allowing a security certificate to be generated for a production site?

I found this in your first post, but could not find it (or got myself lost) in your GitHub listing.

Update: specifically, I am interested in using the v10 docker files to establish a production server with a security cert.

Thanks in advance…

BKM

I’m not sure if it will work with version 7; you could modify the Dockerfile to see if it works.

It uses the Traefik image to set up HTTPS.
To set up HTTPS you’ll have to configure 2 files.

  1. Change email for acme in production_setup/conf/traefik-conf/traefik.toml

  2. Change Traefik frontend rules (Domain) in production_setup/prd.yml
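As a rough sketch, the acme section of traefik.toml (Traefik 1.x style; the exact keys in the repo may differ, and the email is a placeholder you must replace) looks something like:

```toml
# production_setup/conf/traefik-conf/traefik.toml (sketch)
[acme]
email = "you@example.com"   # step 1: change this to your own email
storage = "acme.json"
entryPoint = "https"
onHostRule = true
```

For step 2, the frontend rule in production_setup/prd.yml is a label on the frappe service along the lines of `traefik.frontend.rule=Host:erp.example.com` (the domain here is an assumption; substitute your own).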

But you should follow the whole instruction here.
Production Setup Instruction

Best regards

Thanks for the great work you have put up on this project @pipech

I have a production version of the latest v10 running on one of my testing cloud servers and it appears to run smoothly. I only have a few questions.

  • Where is the actual MariaDB database stored? I could not find this when I went digging around in the dockerfiles.

The database file seems like it might be in a container somewhere because I do not see any place where you would have allowed it to be persistent.

  • Would it be possible to ‘bind’ the actual database file to a real hard drive location so that it can be easily backed up and move to other locations for safe keeping?

I currently use installs directly on VPS servers and I use ‘mysqldump’ command to run a complete database backup every hour. Those backups are then moved to alternate servers so that I can perform a rapid recovery in the event of a catastrophic primary server failure.

I would like to be able to do the same with a containerized ERPNext but the database file would need to be kept on the underlying server hardware to make this process easier to implement.

  • What modification would I have to make to your file for the mariadb image to get the database to bind to the local server hard drive?

This docker implementation is new to me, but it shows a great deal of promise for stability.

Thanks,

BKM

Oh yeah…

And one more question.

In the event the docker server suffers a power cycle, what is used to make sure the docker platform and the related containers are reloaded and restarted?

BKM

The MariaDB database is stored in a Docker volume named mariadb-data-volumes, which is located in /var/lib/docker/volumes.


You could use bind mounts.


Change mariadb-data-volumes:/var/lib/mysql to /your/folder/location:/var/lib/mysql.

volumes:
   - /your/folder/location:/var/lib/mysql

Or don’t change that; instead bind a new volume, then use mysqldump to dump to that folder.
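A sketch of that mysqldump approach (the container name, credentials, and paths here are assumptions; adjust them to your own stack):

```shell
# Dump all databases from inside the running MariaDB container
# to a host folder that is bind-mounted for backups.
docker exec your-stack_mariadb \
    mysqldump -u root -pYourRootPassword --all-databases \
    > /your/backup/location/erpnext-$(date +%F-%H%M).sql
```

The redirection happens on the host side, so the dump lands directly on the host filesystem and can be shipped to another server from there.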


You can use systemd to start Docker automatically by running this command: sudo systemctl enable docker.


PS. I have backup utilities which back up all persistent data including the Docker config, but they will make the server inaccessible for 5-10 minutes. (You could use these tools to move your server to a different host or recover from a major server failure.)


A few notes after having a production stack up for ~2-3 weeks…

  1. Going the stack route vs direct install has some definite advantages in terms of reliability and stability in the sense that any changes made in the actual frappe container are reverted with a server reboot and subsequent stack redeploy. I’ve made use of this a few times by trying something from the bench, not liking the result, and just rebooting the server.

  2. The monitor.yml stack is exceptionally easy to deploy and, paired with any of the readily available system monitoring dashboards that grafana offers (or a custom dashboard), is painless and offers easy insight into the system status at any given time. Having it in a separate stack makes updating configurations (say, to allow for mysql querying after getting everything set up) even better. However, since I am not on AWS (admittedly, this is not a priority and I haven’t tried too hard), I have not found a way to configure the node_exporter > prometheus setup for direct host system monitoring. If anyone has gotten the node_exporter bits to work on a non-AWS host (digitalocean here, but I suspect for node_exporter configs there are basically just AWS and non-AWS considerations) and knows a good reference site, please point me towards it.

  3. As of yet (again, not too much effort and time put in, due to the dropbox/manual/digitalocean backup options), I have yet to get the ‘frank’ backup solution to actually work. A backup folder is configured, and the cron job is set up on the host with the appropriately edited backup.sh’s in place… though… now that I think about it, I may not have chmod-ed the backup.sh to be executable and that might be my issue… so… this one is probably on me.
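For reference, making the script executable and wiring up the hourly cron job would look something like this (the paths are assumptions, and the last line goes in the host’s crontab rather than a shell):

```shell
# Make the backup script executable
chmod +x /path/to/backup.sh

# Crontab entry (add via `crontab -e` on the host): run hourly, log output
# 0 * * * * /path/to/backup.sh >> /var/log/erpnext-backup.log 2>&1
```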

  4. As I posted about a week or so ago (no responses/solutions, but the post is in my activity if you want to read it), it seems that there is some issue with barcode/qr-generation/two-factor-authentication-in-general with the mas-py3 production stack. It fails to generate the qr at all (as evidenced by the fact that there is never a /barcodes/ folder generated when a user initiates the setup with a first login attempt), resulting in a ‘not permitted’ page when the link in the email is clicked (this is readily explained in twofactor.py(?) or whatever the name of the two-factor code is, by seeing that the link checks for the qr=k{code} and returns the ‘not permitted’ page if it’s not found — logs available on request). This is the only major issue I’ve had with the stack and, really, since email two-factor works fine, it’s more of an annoyance than anything. However, since it seems directly caused by not generating the QR, it has me concerned about barcode generation in general. *Note: having been working on barcode integration into print templates for the last day or so, it does seem that there is an ‘in-general’ issue with barcode generation to some degree; not just with qr codes.

To try and resolve the OTP (and barcode generation) issue, I restored the site to a production direct-install on a separate server. It still did not work running as a py3 env, so I migrated to py2. After the py2 migration, it’s working great. So… it comes down to either barcode/qr code generation issues on v11 py3, or issues with the docker image being an ‘essentials-only’ build since the migration to py2 on the stand-alone install both shifted everything down to py2 AND rebuilt all of the libraries and requirements.

Which brings me to my post-notes question.

As far as I can tell, I can’t actually migrate my frappe container (from bash inside the container) to py2, as it’s missing requirements due to being based on the py3 master. Additionally, as far as I can tell, there is no way to change or edit the prd.yml to change the configuration of the stack; pretty much the only option would be to backup, delete the stack, rebuild a new stack, and restore. Is this correct, or is there a way to edit the deployed stack configuration to pull the py2 mas instead of the py3 mas without deleting and rebuilding the stack?

TL;DR Version

  • Has anyone else had issues with barcodes/two-factor authentication on either the v11 py3 master (whether docker or stand-alone) OR the current stack yml (whether py2 or py3)?
  • Is there a way to update/change a deployed stack’s yml without deleting and rebuilding the entire stack?

You should try to follow this instruction; the important parts are steps 5-7 (installing node_exporter and configuring node_exporter on Prometheus).


So this should be a problem with v11 py3, since it doesn’t run on either a direct-install or a docker-install server. You should create a GitHub issue in the ERPNext repo and post more details.


From a brief look into bench migrate-env, I think you should be able to use the v11 py2 image without any special command.

Just replace image version in prd.yml

  frappe:
    image: pipech/erpnext-docker-debian:mas-py3-latest

With

  frappe:
    image: pipech/erpnext-docker-debian:mas-py2-latest

Then take down the whole stack and re-deploy with the same name; all persistent data will remain.

Or, if you don’t want to take down the whole stack, you could run docker service update --image pipech/erpnext-docker-debian:mas-py2-latest your-stack-name_frappe

Normally I prefer to take down the whole stack, since it will also restart the Redis server, which is a good thing when updating versions. But my stack has separate yml files for the database and proxy, so it’s more convenient to take down only the frappe stack. I’ll update my repo to have separate yml files soon.
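The take-down-and-redeploy cycle described above can be sketched as follows (the stack name `erpnext` is an assumption; use whatever name you deployed with):

```shell
# Remove the running stack; named volumes (and the data in them) persist
docker stack rm erpnext

# Wait for the services to finish shutting down, then
# redeploy with the same name so volumes are reattached
docker stack deploy -c prd.yml erpnext
```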

:grin:

Good stuff! I’ve really been thinking of trying ERPNext, but couldn’t dedicate a whole server/IP for just trying :slight_smile: thank you.

By the way, how long does it take to deploy the image? I seem to be stuck at “Rebuilding item-dashboard.min.js” ;p

It shouldn’t take long; the whole process should take no more than 5 minutes.