MySQL (MariaDB) crashing on DO CentOS 7 (SOLVED)

Continuing the discussion from New install fails on Centos 7 (SOLVED):

I noticed ERPNext had “crashed” again on DO. The initial login page displayed, but after clicking “Login” I got the “Internal Service Failure” notice again.

I logged in via ssh and immediately ran “ps faux”, then rebooted the server and ran “ps faux” again. The only difference was that mysql wasn’t running when I first connected, but it was running after the reboot.

Does /var/log/mariadb[1] indicate mariadb crashed because it ran out of memory? I’m running the smallest droplet (512MB RAM), but the database is minimal. I have not installed any graphics or desktop, and the only software I’ve installed directly in addition to ERPNext is Postfix, Dovecot and Samba4 (although they are certainly not loaded, if active at all). However, “top”[2] shows only 40MB of free memory four hours after the reboot. Is this normal for CentOS and this type of system? There aren’t even any real users hitting ERPNext.

Is there any other information I should be looking at after a “crash”?
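For reference, a few commands that can confirm an out-of-memory kill on a stock CentOS 7 MariaDB install (the log path and the time window below are assumptions/examples, adjust to your setup):

# Kernel log: the OOM killer reports which process it killed
dmesg | grep -iE 'out of memory|killed process'

# systemd journal for the MariaDB unit around the crash time
journalctl -u mariadb.service --since "2014-11-25 05:55" --until "2014-11-25 06:10"

# MariaDB error log (default location for the CentOS 7 packages)
less /var/log/mariadb/mariadb.log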

Thanks,
Dale

[1] /var/log/mariadb

[root@firefly ~]# cat /var/log/mariadb

141124 02:08:23 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
141124 2:08:23 InnoDB: The InnoDB memory heap is disabled
141124 2:08:23 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141124 2:08:23 InnoDB: Compressed tables use zlib 1.2.7
141124 2:08:23 InnoDB: Using Linux native AIO
141124 2:08:23 InnoDB: Initializing buffer pool, size = 128.0M
141124 2:08:23 InnoDB: Completed initialization of buffer pool
141124 2:08:23 InnoDB: highest supported file format is Barracuda.
141124 2:08:24 InnoDB: Waiting for the background threads to start
141124 2:08:25 Percona XtraDB (http://www.percona.com) 5.5.37-MariaDB-34.0 started; log sequence number 14448658
141124 2:08:25 [Note] Plugin 'FEEDBACK' is disabled.
141124 2:08:25 [Note] Server socket created on IP: '0.0.0.0'.
141124 2:08:25 [Note] Event Scheduler: Loaded 0 events
141124 2:08:25 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.37-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
141124 9:29:22 [Warning] IP address '122.227.228.41' could not be resolved: Name or service not known
141125 06:02:01 mysqld_safe Number of processes running now: 0
141125 06:02:01 mysqld_safe mysqld restarted
141125 6:02:01 InnoDB: The InnoDB memory heap is disabled
141125 6:02:01 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141125 6:02:01 InnoDB: Compressed tables use zlib 1.2.7
141125 6:02:01 InnoDB: Using Linux native AIO
141125 6:02:01 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137756672 bytes) failed; errno 12
141125 6:02:01 InnoDB: Completed initialization of buffer pool
141125 6:02:01 InnoDB: Fatal error: cannot allocate memory for the buffer pool
141125 6:02:01 [ERROR] Plugin 'InnoDB' init function returned error.
141125 6:02:01 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141125 6:02:01 [ERROR] mysqld: Out of memory (Needed 128917504 bytes)
141125 6:02:01 [Note] Plugin 'FEEDBACK' is disabled.
141125 6:02:01 [ERROR] Unknown/unsupported storage engine: InnoDB
141125 6:02:01 [ERROR] Aborting

141125 6:02:01 [Note] /usr/libexec/mysqld: Shutdown complete

141125 06:02:01 mysqld_safe mysqld from pid file /var/run/mariadb/mariadb.pid ended

[2] free memory

[root@firefly ~]# top
top - 03:28:32 up 6:34, 2 users, load average: 0.08, 0.03, 0.05
Tasks: 84 total, 2 running, 82 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.6 us, 0.4 sy, 0.0 ni, 98.9 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 501908 total, 466000 used, 35908 free, 4168 buffers
KiB Swap: 0 total, 0 used, 0 free. 70208 cached Mem

Your diagnosis is correct: it ran out of memory. You will have to reduce innodb_buffer_pool_size to as low as possible: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size (128M is already low in my opinion, but it still seems to be too much for this configuration).
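As a quick sanity check, a sketch of confirming the running value and setting a lower one (the credentials and the 64M target are assumptions, not recommendations from this thread):

mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# in /etc/my.cnf (or a file under /etc/my.cnf.d/), the M suffix is accepted:
[mysqld]
innodb_buffer_pool_size = 64M

Note that the built-in 5.5 default is 128M, i.e. 134217728 bytes, which is where the “134M” figure sometimes quoted comes from.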

Linux always uses all memory it can to cache files.
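A minimal way to see this with standard tools: compare the “used” figure against the buffers/cache figures, since page cache is handed back to applications on demand.

free -m

Memory reported as buffers or cached is reclaimable, so a low “free” number on its own does not mean the box is about to run out.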

Thanks pdvyas. If I understand correctly, the default is 134M and you are proposing to reduce it to 128M? Is that correct? It doesn’t seem like a big change.

I reduced the buffer pool to 64MB and will see what happens (and whether the server is still running next week). I edited /etc/my.cnf and added:

[mysqld]

innodb_buffer_pool_size=64000000

then restarted mariadb

systemctl restart mariadb.service

Is this the best way, or is it “more” correct to edit /etc/my.cnf.d/server.cnf?
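For what it’s worth, on a stock CentOS 7 MariaDB package /etc/my.cnf typically just pulls in /etc/my.cnf.d/ via an !includedir line, so a drop-in there is equivalent; a sketch (the exact file to use is an assumption):

# /etc/my.cnf.d/server.cnf, under the existing [mysqld] section
[mysqld]
innodb_buffer_pool_size = 64M

# then restart the service
systemctl restart mariadb.service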

Btw, here is the top output 18 hours later (compared to the first “top” posting):

top - 14:52:04 up 17:57, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 79 total, 2 running, 77 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.4 us, 0.2 sy, 0.0 ni, 99.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 501908 total, 463628 used, 38280 free, 3048 buffers
KiB Swap: 0 total, 0 used, 0 free. 70004 cached Mem

It seems nothing much has changed (or at least not in what is shown). Besides cutting back on the amount of memory MariaDB uses, is it possible to cut back on the amount of memory Redis or memcached take? (i.e. so there is more free memory “reserved” for spikes, which I suspect happen when script kiddies hit the server.)
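Both services do expose memory caps; a hedged sketch (the file locations, sizes, and service names below are assumptions, check where your install actually keeps its configs):

# Redis: in the redis config file your site/bench actually uses
maxmemory 50mb
maxmemory-policy allkeys-lru

# memcached on CentOS 7: /etc/sysconfig/memcached (value in MB)
CACHESIZE="32"

# restart whichever services you changed, e.g.
systemctl restart redis.service memcached.service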

FYI, no crashes now for 8 days. It looks fixed, but as there is no real load on the system, I’m concerned about what will happen when there are 10–15 simultaneous users.

Redis should not bloat much.

Reducing the InnoDB buffer pool should cause more disk I/O, but it should not crash. In the previous situation it crashed because it ran out of memory…