AWS Lambda - Python Only

Hey all,

I’ve got a rather interesting problem to solve that I’m hoping someone out there has some advice on. My client and I have implemented ERPNext v11 in our own server environment for two different businesses, and we’re about to acquire another one. The existing businesses are quite normal in terms of transactional workflow and low-spec in terms of transaction volume, typically around 300-500 per day. The new business, however, is a different story: the number of transactions is incredibly peaky, e.g. 10 an hour for most of the day, then spiking to several thousand that need to be processed in under 5 minutes. The reason for the peakiness is bidding/acceptance of low-value work, combined with currency valuations. Interesting stuff (for me anyway).

I’ve created an app that can handle the workflow easily enough; it’s processing the volume within the time requirement that is causing the headaches. ERPNext/Frappe is a very monolithic infrastructure, not really suited to scalable environments (although I’m sure many could argue that).

I’ve done some rather rudimentary speed analysis and found that most of the time in a request is spent on the app server (although the DB could use a little tuning too). I’m not certain of the exact breakdown, but whether I like it or not, the chances of me drastically speeding up the ERPNext codebase are slim without an overhaul (I’m not sure I have the skills either, let alone the time). So I’m left with horizontal scalability, pointing at a single database, and I’d really like it to be elastic (capacity as-needed) rather than keeping 50+ servers alive 24x7. A single instance running for a second could connect to the DB, create the transaction and lock in the purchase, with a few concurrent instances each quickly creating 5 or so before terminating.

My hope is that Frappe/ERPNext can be slimmed down to just the Python library components (think ‘bench console’). If that’s possible, I can figure out how to get jobs to these little Lambda instances via a queue and rely on Amazon to do the scheduling for me at scale. I imagine there will be problems with PDF generation, and of course I’d need to scale up my RDS instance (Amazon’s managed MariaDB store) to handle it, but PDF generation can wait since it’s part of sending an email and we always wait for those. Locking in the transaction timestamp is the most important thing, as it decreases the risk of currency re-valuation.
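To make it concrete, here’s a very rough sketch of the kind of Lambda handler I’m imagining, assuming the Frappe codebase plus a site whose site_config.json points at the RDS endpoint can actually be bundled into the deployment package. The site name, doctype and field names below are placeholders, not my real app:

```python
import json

import frappe


def handler(event, context):
    # Boot a headless Frappe session against the remote database,
    # the same way `bench console` would, then lock in one
    # transaction per queued message.
    # Assumes the bundled site_config.json points db_host at RDS
    # and the sites/ directory is shipped inside the package.
    frappe.init(site="erp.example.com", sites_path="/var/task/sites")  # placeholder paths
    frappe.connect()
    frappe.set_user("Administrator")
    try:
        for record in event.get("Records", []):
            payload = json.loads(record["body"])
            doc = frappe.get_doc({
                "doctype": "Bid Acceptance",            # hypothetical doctype
                "supplier": payload["supplier"],        # hypothetical fields
                "transaction_date": payload["bid_timestamp"],
            })
            doc.insert()
        frappe.db.commit()
    finally:
        frappe.destroy()
    return {"processed": len(event.get("Records", []))}
```

If frappe.init/frappe.connect can run happily from a zipped-up package like that, the rest is mostly plumbing; how much of the bench directory structure Frappe actually needs at runtime is really the heart of my question.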

Does anyone out there have any suggestions for me, or things to look out for? My dream would be a version of the bench install process that creates just the hooks to a (non-local) database and the upload package for AWS.

For a suitably skilled person interested in contributing the result to the community, I’m also up for putting in some dollars. I think it would be a great tool to discuss with large-volume businesses currently buying multi-tenant infrastructure like NetSuite because they promise to handle the volume at the same price.

Note: Apologies for the long post, and thanks for getting all the way to the bottom. :slight_smile:

@achillesrasquinha

I don’t normally tag people in, but from some web searching it looks like you would be the best person to point me in the right direction…

For those interested, I’ve adjusted the model to one that receives from a queue (Amazon SQS) and, using concurrent Lambda invocations (max 20), makes API requests to a dedicated box servicing them. It’s not ideal, but it could be created in a day and seems to work for now.
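For anyone wanting to copy the stop-gap, the consumer side is roughly the sketch below: an SQS-triggered Lambda relaying each message to the dedicated box via Frappe’s REST API with token auth (or a session login, depending on your Frappe version). The environment variable names and the doctype are placeholders for my actual setup, and the max-20 limit is just the function’s reserved concurrency setting.

```python
import json
import os

import requests

# Placeholders - the real values live in the Lambda environment config.
FRAPPE_URL = os.environ["FRAPPE_URL"]          # e.g. https://erp.example.com
API_KEY = os.environ["FRAPPE_API_KEY"]
API_SECRET = os.environ["FRAPPE_API_SECRET"]


def handler(event, context):
    # Triggered by SQS: relay each queued bid to the dedicated
    # Frappe box through the standard /api/resource endpoint.
    headers = {
        "Authorization": "token {0}:{1}".format(API_KEY, API_SECRET),
        "Content-Type": "application/json",
    }
    created = []
    for record in event["Records"]:
        payload = json.loads(record["body"])
        resp = requests.post(
            FRAPPE_URL + "/api/resource/Bid Acceptance",   # hypothetical doctype
            headers=headers,
            data=json.dumps(payload),
            timeout=30,
        )
        resp.raise_for_status()
        created.append(resp.json()["data"]["name"])
    return {"created": created}
```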