I’m a bit confused about Docker Swarm and Kubernetes. It seems like the whole strategy is to add more hardware for more sites or tenants.
Horizontal scaling with Docker Swarm or Kubernetes means adding more hardware to one frappe-bench-like release.
It is the administrator’s choice to run one site or many sites on a frappe-bench.
One frappe-bench means:
- a Python environment with frappe, erpnext, and custom apps installed
- static assets built from the source code of the apps installed in this environment
- an Express.js/Socket.IO process for websockets
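For illustration, one bench maps roughly onto a set of containers. Here is a minimal, hypothetical Compose-style sketch of that layout — the service names are assumptions for this example, not the official frappe_docker file:

```yaml
# Hypothetical sketch of one frappe-bench as containers.
# Service names are made up for illustration only.
services:
  backend:       # gunicorn serving the Python environment (frappe, erpnext, custom apps)
    image: frappe/erpnext:latest
  frontend:      # nginx serving the static assets built from the installed apps
    image: frappe/erpnext:latest
  websocket:     # the Express.js/Socket.IO process for realtime events
    image: frappe/erpnext:latest
  queue-worker:  # background job worker
    image: frappe/erpnext:latest
  redis-cache:
    image: redis:6.2
  redis-queue:
    image: redis:6.2
  db:
    image: mariadb:10.6
```

Every site on the bench shares this same set of processes; only the site databases differ.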
Do I vertically scale one of the sites with a more powerful EC2 instance if that site has more user connections than its current EC2 instance can handle? Or can I deploy more pods/instances (web workers, background workers, Redis, etc.) just for that site?
Vertical scaling is the simplest. Search the forum and you’ll find plenty of resources on the topic.
Horizontal scaling with Docker and Kubernetes is comparatively new.
I get the feeling that all sites share the added instances of workers, Redis, and the DB, but I’m no expert on Docker and k8s.
Scaling nodes and pods means adding more resources to a running frappe-bench and its site(s).
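Concretely, “adding more resources” usually means raising replica counts on the bench’s services rather than buying a bigger machine; all sites on that bench then share the extra workers. A hedged Compose fragment, assuming a web worker service named `backend` (the name is an assumption):

```yaml
# Hypothetical fragment: horizontal scaling = more replicas of a service.
# Swarm spreads these containers across whatever nodes are in the cluster.
services:
  backend:
    image: frappe/erpnext:latest
    deploy:
      replicas: 8   # 8 web worker containers, shared by all sites on this bench
```

The same idea applies in Kubernetes by increasing `replicas` on the corresponding Deployment.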
Is anyone willing to point me to ways of deploying a single Frappe or ERPNext site that can handle 10k connections or more?
Nothing beats bare metal if you can maintain it for maximum uptime.
If this is internal to a company and an IT admin team is available to set up and maintain the server(s), get the best server that can be afforded at the time.
With this kind of server, even a non-Docker bench setup will scale.
I prefer Docker because it makes it easy to install other things alongside ERPNext, and I’d want to install many other things on such a powerful bare-metal server.
Recently, for one such company with sysadmins available, I set up the following on a dual 20-core Xeon server with 96 GB of RAM:
- Docker Swarm
- Swarm cronjobs
- ERPNext + custom app (30 default worker replicas)
- Angular frontends
- Node.js backend
If your team is up for it and you are buying many servers for a datacenter, install a bare-metal Kubernetes cluster: storage, a load balancer, an etcd cluster, and multiple master and worker nodes.
If bare metal is not possible and you expect global access, maximum uptime, and public usage?
Get managed Kubernetes from the provider of your choice and set up auto-scaling.
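On managed Kubernetes, auto-scaling of the bench typically means a HorizontalPodAutoscaler on the worker Deployments (the cluster autoscaler then adds nodes as pods demand them). A sketch, assuming a Deployment named `erpnext-backend` — that name is an assumption, not from the Helm chart:

```yaml
# Hypothetical HPA: grow/shrink web worker pods with CPU load.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: erpnext-backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: erpnext-backend   # assumed Deployment name
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

Pair this with provider node auto-scaling so the cluster itself grows when the extra pods no longer fit.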
Report Docker Swarm or Kubernetes issues at https://github.com/frappe/frappe_docker or https://github.com/frappe/helm.
I volunteer to maintain these repos because I use Kubernetes to host my own sites.