Launching ERPNext using the helm chart isn't creating the database schema

I’ve been working through the instructions on how to use the helm chart, and although everything seems to be done per the instructions, for some reason the database I’m setting up is not getting a schema built.

I’ll warn in advance that I’m a bit of a novice with MariaDB…

When I connect to the database, execute `USE [database name];` and then `SHOW TABLES;`, it comes back with no tables.

Does the initial schema for the master database get created when you create the first site?

There won’t be any site created by just installing the helm chart.

It’ll only create the needed services and start them.

To create the first site (and its schema), a Job needs to be created.

About job: https://helm.erpnext.com/kubernetes-resources/create-new-site-job

To clarify, I am talking about the frappe master database. This is not the “_{short-guid}” database, but the master database, which I called “frappe”.

There is no master database for frappe.

The helm chart installs a ~/frappe-bench-like environment in the k8s cluster.

On its own, ~/frappe-bench is just the environment (python env, node_modules, static assets, common config files).

Creating a site creates the database and site-related config files. In the case of a single-machine install, the `bench new-site` command creates the first site. In the case of a k8s install, a Job needs to be created to run the command that creates the site.
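As a rough sketch, such a Job just runs `bench new-site` against the shared sites volume. The image tag, secret name, and PVC name below are assumptions for illustration; see the linked helm docs for the chart's actual Job spec:

```yaml
# Hypothetical Job that creates the first site by running `bench new-site`.
# Image tag, secret names, and claim name are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: create-erp-site
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-site
          image: frappe/erpnext-worker:v13   # assumed image/tag
          command: ["bash", "-c"]
          args:
            - >
              bench new-site erp.example.com
              --install-app erpnext
              --db-root-password "$MYSQL_ROOT_PASSWORD"
              --admin-password "$ADMIN_PASSWORD"
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-root-password   # assumed secret
                  key: password
            - name: ADMIN_PASSWORD
              value: changeit
          volumeMounts:
            - name: sites-dir
              mountPath: /home/frappe/frappe-bench/sites
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: erpnext   # assumed PVC from the helm release
```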

I don’t see any benefit in running MariaDB and NFS in the cluster, unless your cluster has powerful nodes, hyperconverged volumes, and expert scripts to keep things highly available on Kubernetes.

For my setup I have a VPS on a private network and use it as the database and NFS server.
I’ve attached volumes to it at /var/lib/mysql for MariaDB and /var/nfs/general for NFS, and I can scale them on the fly using the cloud provider console.

If things grow I may use provider managed services or scale and make the nfs and mariadb HA separately.

I’ll plug my company Castlecraft here for k8s and cloud native related services.

If there is no master database, then why, in the yaml for mariadb, do we define auth.database, auth.username, and auth.password?

I installed the mariadb helm chart without those values, with only auth.rootPassword set, and the mariadb chart failed to start.
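For reference, a minimal values fragment with those auth keys looks like this. All values are placeholders; as I understand it, the database/user here are just an application database the chart's init scripts create, not a frappe master database:

```yaml
# values.yaml fragment for the bitnami/mariadb chart (illustrative values).
# auth.database/username/password make the chart create an application
# database plus a user that can access it; frappe itself mainly needs the
# root credentials, since `bench new-site` creates per-site databases.
auth:
  rootPassword: super-secret-root-pw
  database: erpnext          # optional convenience database
  username: erpnext
  password: another-secret
```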

So, after a lot of work, and using a different chart (mariadb-galera), I am finally able to get the system online in k8s.

Not sure if I should continue the thread here, or start a new one… but… here’s what i’m seeing now:

Any thoughts on what may cause such an error? (I checked the logs; nothing to report there.)

I found this when I tried Mariadb galera. https://stackoverflow.com/a/41207214

Run the following query in your MariaDB shell:

```sql
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
```

Alternatively, set it in the values.yaml of mariadb-galera:

```
. . .
  ## Safety
  skip_host_cache
  skip_name_resolve
  max_allowed_packet=16M
  max_connect_errors=1000000
  sql_mode=STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY
. . .
```

I tried running the database in-cluster; it is not a good idea. I fell back to a VPS / managed service for the database and NFS.

You need to know how Galera works and how it is recovered after a crash.


Let me approach this in a different light.

As of today, there is a helm chart available to set up ERPNext in Kubernetes. This is ace. What I am missing is the manual work still required to get the k8s cluster to the point where the helm chart can actually deploy the system. I feel like those manual steps could be part of the helm chart as well.

I’d like to make an offer to collaborate and pair-code on the helm chart to get the rest of the way there, so that you have a single values.yaml to fill out, execute the chart, and start using the system once complete. If possible, it would be grand if, when you have existing persistent stores, you could flip a switch that says “use what I have”. I think this is in the script(s) now, but I’ve missed it. If this approach is taken, sub-charts can also be added for things like MariaDB, so the versions are pinned, and the proposed MariaDB encoding setup could be done in advance instead of relying on whoever deploys the system to handle it.

If this is done and is otherwise possible, it makes the rebuild process in the event of a catastrophic failure more reliable and fault tolerant.

Again, I am making an offer to assist here. I’d really like to use ERPNext, but if the only way I can bring this online is the VirtualBox appliance, it is prohibitive to use ERPNext, no matter how robust it is.

I am making an offer here to assist… my negativity is more of a “call to action”, which is difficult for folks to accept until after they work with me a few times.

It’s concerning, overall, how brittle the deployment is. I’m over a week in and do not have anything stable. What I did get online died with the database (Galera), and I haven’t gotten back online since.

With needing custom modules in the system, I’m unsure if a hosting company can accommodate my needs. I also choose to own my data.

Without knowing that the helm chart is legit, and the same for the setup docs, I need to keep searching for an ERP system… I really want to avoid SAP, especially because of the price tag, but also because I want access to the source code, so that when things go bump in the night, or when I am audited, the auditors can look at the source code to see why something happened as it did.

Maybe you can provide additional details that I am missing at this point…

I had auto-install dependencies in mind.

Here’s my branch https://github.com/revant/helm-1/tree/auto-install-dependencies

Problems:

  • The NFS server chart as well as the nfs-client provisioner chart are deprecated; the current alternative is https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner or a managed service
  • I still recommend keeping the db and other persistence external to the cluster (managed service or separately hosted) unless the nodes under kubernetes are powerful enough to run OpenEBS or rook/ceph.
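For the record, a minimal values fragment for nfs-subdir-external-provisioner might look roughly like this; the server address and export path are placeholders for your own NFS box:

```yaml
# Illustrative values.yaml fragment for nfs-subdir-external-provisioner.
# Point it at an existing NFS export; it then provisions a subdirectory
# per PersistentVolumeClaim.
nfs:
  server: 10.0.0.10        # placeholder: your NFS server's address
  path: /var/nfs/general   # placeholder: the exported directory
storageClass:
  name: nfs
  accessModes: ReadWriteMany   # the ERPNext sites volume needs RWX
```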

If we have an alternative for the deprecated NFS chart(s), we can do something.

I do agree, but with the unique needs of ERPNext with MariaDB (you cannot just grab any old MariaDB docker image, and the same goes for using my Synology to host MariaDB for now), the development/staging use of MariaDB/ERPNext is extremely painful.

Going to take a look at your branch, the new link, etc. and see what is what.

Before I go further, I have an iSCSI device that I’ll try first. My issue is that I have limited budget and limited hardware resources to put this all together for a demo mode that is somewhat fault-tolerant.

The link you provided is the one I have been using for NFS mounts, and it’s what caused me issues.

Let’s see how iSCSI goes.

After much hacking… I have things online. I’ve got notes, and I plan on writing something a bit more “formal” about what I did to bring this online.

Getting LetsEncrypt going as well would be welcome. I do see mentions of a module within the ecosystem, so I think that’s a good place to start. Will look into what is possible, then come back to inquire further.


I’m using https://cert-manager.io + https://kubernetes.github.io/ingress-nginx

It automates letsencrypt certificate management.

Check the dns challenge webhook section of cert-manager for setting up wildcard certificate.
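For a rough idea, a ClusterIssuer with a dns01 solver looks like the sketch below. The Google Cloud DNS solver is just one example (matching the Google Domains setup mentioned in this thread); the project name, email, and secret names are placeholders:

```yaml
# Sketch of a cert-manager ClusterIssuer using a dns01 solver, which is
# what wildcard certificates require. Values are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com           # placeholder
    privateKeySecretRef:
      name: letsencrypt-dns-key
    solvers:
      - dns01:
          cloudDNS:
            project: my-gcp-project    # placeholder
            serviceAccountSecretRef:
              name: clouddns-sa        # placeholder secret with SA key
              key: key.json
```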

Ok. Thank you for that. I am currently down the same road with cert-manager and ingress-nginx (which I think is what’s used with microk8s).

My struggle is in trying to use http01 as the challenge, which is the current pain-point. The challenge is not coming back for some reason, and it might have to do with the way the temporary ingress from cert-manager works. I have a working theory to attempt to have a certificate issued separate from the ingress definitions, then trying to use that. Otherwise, I’ll have to use Google Cloud DNS (dns name is hosted on Google Domains) or similar to create the dns challenge.

I discovered my difficulty late last evening. When you define the ingress route and you want the http01 challenge, you have to set up your annotations like so:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: [Ingress Name]
  namespace: [Namespace in K8s]
  labels:
    app.kubernetes.io/instance: erpnext
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    kubernetes.io/tls-acme: "true"
    acme.cert-manager.io/http01-edit-in-place: "true"
```
The big part here is the `acme.cert-manager.io/http01-edit-in-place: "true"` annotation, which instructs cert-manager to temporarily add the `/.well-known/...` route to this existing Ingress (rather than creating a separate solver Ingress) so the challenge is routed appropriately for the resolver.
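For completeness, the `letsencrypt-prod` ClusterIssuer that annotation refers to can be sketched like this with an http01 solver; the email and secret name are placeholders:

```yaml
# Sketch of the ClusterIssuer referenced by the annotation above,
# using an http01 solver via the nginx ingress class. Values are
# placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com         # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx
```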

HTH

There is a sample ingress with the annotation on this page: https://helm.erpnext.com/kubernetes-resources/create-site-ingress

Also, there is a prerequisite of creating an Issuer or ClusterIssuer before certificates are created. This page mentions a few things about cert-manager, issuers, and annotations: https://helm.erpnext.com/prepare-kubernetes/

Your findings may help someone in future.

For reference: https://cert-manager.io/docs/usage/ingress/#supported-annotations