ERPNext Scalability

Dear all,

Can ERPNext work in big companies, with many users and branches and a lot of daily transactions?

Thanks a lot

2 Likes

In most cases your bottleneck is going to be the database. @hfarid has implemented ERPNext in a company with 30,000 employees, so it's possible, but there are a few things you will have to optimize.

4 Likes

Sure, it can work for big companies.
You have to do the following:

  1. Increase the timeout for NGINX and Supervisor. This can be done with:
    bench config http_timeout
    bench setup supervisor
    bench setup nginx

  2. Some huge transactions can be handled with database triggers, or as multi-record updates using a single MariaDB UPDATE statement.

  3. Update the MariaDB configuration (my.cnf) and change the appropriate parameters for big queries.
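The bench commands in step 1 might look like this in practice. The 600-second value is only an example (a figure floated later in the thread), and the reload commands depend on how your server is set up:

```shell
# Raise the HTTP timeout (in seconds); 600 is an example value, tune it for your workload
bench config http_timeout 600

# Regenerate the supervisor and NGINX configs so they pick up the new timeout
bench setup supervisor
bench setup nginx

# Reload the affected services (exact commands depend on your distro/setup)
sudo supervisorctl reload
sudo systemctl reload nginx
```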

With the above, it will run smoothly for a big company.

Best of luck.

9 Likes

Thanks @h_farid_gm for the information.

@h_farid_gm Would you be willing to speak on this subject at a meetup or a community webinar? Your experience and use case seems like it would make for really good educational content.

3 Likes

Should a 600-second timeout work?

You mean for the InnoDB buffer size? Can you share your configuration?

What about using stored procedures, and/or foreign keys with cascading updates and deletes?

Do you think the Frappe ORM could be improved in some way to get better performance? As far as I know, when a document is deleted it enqueues a background job to check and delete every linked document; could this be delegated to the database using foreign keys?
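To make the question concrete, here is a minimal sketch of what a cascading foreign-key delete does, using sqlite3 purely as a stand-in for MariaDB (the table and column names are illustrative, not Frappe's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite disables FK enforcement by default

# Parent/child tables with a cascading delete, the behaviour the question asks about
conn.execute("CREATE TABLE sales_order (name TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE sales_order_item (
        name TEXT PRIMARY KEY,
        parent TEXT REFERENCES sales_order(name) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO sales_order VALUES ('SO-0001')")
conn.executemany("INSERT INTO sales_order_item VALUES (?, ?)",
                 [("SOI-1", "SO-0001"), ("SOI-2", "SO-0001")])

# Deleting the parent removes the children in the same statement,
# with no background job needed
conn.execute("DELETE FROM sales_order WHERE name = 'SO-0001'")
remaining = conn.execute("SELECT COUNT(*) FROM sales_order_item").fetchone()[0]
print(remaining)  # 0
```

The trade-off is that Frappe's Python-side link checking works across all DocTypes generically, while FK constraints would have to be declared per table.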

2 Likes

The changed my.cnf parameters are as follows, but it depends on how much RAM you have in your server.
This case is for a server with 16 GB RAM:

max_connections = 100
connect_timeout = 50
wait_timeout = 6000
max_allowed_packet = 256M
thread_cache_size = 128
sort_buffer_size = 64M
bulk_insert_buffer_size = 128M
tmp_table_size = 256M
max_heap_table_size = 256M
net_buffer_length = 100K

key_buffer_size = 512M

table_open_cache = 400
myisam_sort_buffer_size = 512M
concurrent_insert = 2
read_buffer_size = 64M
read_rnd_buffer_size = 32M

query_cache_limit = 8K
query_cache_size = 256M

slow_query_log_file = /var/log/mysql/mariadb-slow.log
long_query_time = 10

* InnoDB

innodb_log_file_size = 1G
innodb_buffer_pool_size = 2G
innodb_log_buffer_size = 8M
innodb_file_per_table = 1
innodb_open_files = 400
innodb_io_capacity = 400
innodb_flush_method = O_DIRECT

11 Likes

For multi-record updates I prefer doing it through Python using the frappe.db.sql function.
You can also use stored procedures or triggers.
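A minimal sketch of the set-based update idea, again using sqlite3 as a stand-in for MariaDB (in ERPNext you would pass the same UPDATE statement to frappe.db.sql; the table and column names here are illustrative):

```python
import sqlite3

# sqlite3 stands in for MariaDB; in Frappe the UPDATE below would be
# sent through frappe.db.sql() instead of a raw cursor
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (name TEXT PRIMARY KEY, item_group TEXT, disabled INTEGER)")
conn.executemany("INSERT INTO item VALUES (?, ?, 0)",
                 [("ITEM-1", "Legacy"), ("ITEM-2", "Legacy"), ("ITEM-3", "Current")])

# One set-based UPDATE touches all matching rows in a single round trip,
# instead of loading and saving each document through the ORM
conn.execute("UPDATE item SET disabled = 1 WHERE item_group = ?", ("Legacy",))
disabled = conn.execute("SELECT COUNT(*) FROM item WHERE disabled = 1").fetchone()[0]
print(disabled)  # 2
```

Note that bypassing the ORM this way also bypasses document validation and hooks, which is exactly why it is faster.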

3 Likes

@hfarid: How do you handle multi-branch permissions? For example, branch A should only see the records and statistics from branch A, not branch B. Or the case where the manager of one warehouse can make records against another warehouse even though they don't have permission for it.

1 Like

You can do that with User Permissions.

2 Likes

@hfarid Would you be willing to do a wiki post/guide about this? I imagine the post could cover the scale (of the implementation) you required and the changes (to settings) you have made, the infrastructure you are using and a rough estimate of costs and maintenance the system requires. This would be extremely useful for a lot of our users.

2 Likes

We just offload our entire DB to Amazon RDS that allows us to scale very easily.

4 Likes

@vjFaLk What does RDS stand for?

Sorry, I should have been more specific. RDS is the Relational Database Service, the managed cloud database service provided by Amazon.

Also, our servers are right on EC2 and they’re on the same region, so there’s almost no latency.

@vjFaLk Oh, I know about that service. It seems kind of hard to estimate the monthly cost for it, doesn't it?

Here’s the pricing:

https://aws.amazon.com/rds/mariadb/pricing/

If you get T2Large (the last one) you will pay $2.2k per year. Not too bad.
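As a back-of-envelope check of that yearly figure, assuming an on-demand hourly rate of roughly $0.25/hour for that instance class (an assumption for illustration; always confirm against the AWS pricing page):

```python
# Rough yearly cost from an assumed hourly on-demand rate
hourly_rate = 0.25          # USD/hour, assumed for illustration
hours_per_year = 24 * 365
yearly_cost = hourly_rate * hours_per_year
print(round(yearly_cost))   # 2190, roughly the $2.2k mentioned above
```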