I am getting a Request Timeout error while performing any action. I have found fail2ban utilizing 100% of my CPU. Kindly help.
That's generally because of unrotated logs.
Thank you Vamyip.
This is my logrotate file:

```
weekly
rotate 4
compress
delaycompress
missingok
postrotate
    fail2ban-client flushlogs 1>/dev/null || true
endscript

# If fail2ban runs as non-root it still needs to have write access
# to logfiles.
# create 640 fail2ban adm
create 640 root adm
```
Is this the reason for the slow performance of my site?
No. Fail2ban reads the logs configured for its jails (generally nginx and ssh) to identify anomalous traffic. If the nginx and ssh logs are not rotated, they grow large and fail2ban takes longer to read them.
First, verify whether any logs are being rotated at all. If some are, then check why nginx's logs in particular are not.
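A few quick checks for the first step (a sketch assuming the default Debian/Ubuntu paths; adjust if yours differ):

```shell
# Rotated logs leave .1 / .2.gz siblings behind; if none exist,
# rotation isn't happening for that directory.
ls -lah /var/log/nginx/

# logrotate records the last rotation time for every file it manages.
# (On some distros this file is /var/lib/logrotate.status instead.)
cat /var/lib/logrotate/status

# Dry run: -d only prints what logrotate *would* do, changing nothing.
sudo logrotate -d /etc/logrotate.conf
```

If the dry run reports errors for the nginx config, that points to why its logs are not rotating.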
Thank You VamYip,
The system is responding normally after clearing my log files.
Thanks for your support.
Can you elaborate more? I am facing the same issue. How do I verify that first point? Please give detailed steps.
I also need how-to steps for the second part.
Thanks a lot
Manually deleted all older logs from the system:

- cd /var/log
- ls -lah
- find the large files and clear their contents with `sudo truncate -s 0 <file name>` (note: `sudo echo > filename` does not work, because the redirection runs without sudo)
- cd ~/frappe-bench/logs
- repeat step 3 for every .log file
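The steps above can be sketched as follows (paths are the ones from this thread; adjust for your setup). Truncating in place is deliberate: it keeps the file handles of running daemons valid, unlike deleting and recreating the file.

```shell
cd /var/log
ls -lah                          # spot the oversized logs

# Truncate a large log in place; the write itself runs as root,
# which 'sudo echo > file' would not achieve.
sudo truncate -s 0 nginx/access.log

# Truncate every .log file in the bench logs directory.
cd ~/frappe-bench/logs
for f in *.log; do
    sudo truncate -s 0 "$f"
done
```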
Then add the line below to the fail2ban logrotate config:

- cd /etc/logrotate.d/
- sudo nano fail2ban
- add this line inside the postrotate block
fail2ban-client flushlogs 1>/dev/null || true
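For reference, the stock Debian/Ubuntu `/etc/logrotate.d/fail2ban` wraps these directives in a block for `/var/log/fail2ban.log`; after the edit the relevant part should look roughly like this (a sketch, your file may differ):

```
/var/log/fail2ban.log {
    weekly
    rotate 4
    missingok
    postrotate
        fail2ban-client flushlogs 1>/dev/null || true
    endscript
}
```

`fail2ban-client flushlogs` makes fail2ban reopen its log file, so it starts writing to the freshly rotated one instead of the renamed old file.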
Found the reason why nginx's logs (and most of the others) were not being rotated on servers set up before March 2020. It was a bug in bench's Let's Encrypt setup script that broke the /etc/crontab config. Cron then ignored the file altogether, which caused the standard cron jobs (cron.daily, cron.weekly, etc.) to fail. Note that custom cron jobs we set up using crontab -e continued to run, because they are not referenced via /etc/crontab. This issue affects only the jobs referenced via /etc/crontab.
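If you suspect the same /etc/crontab breakage, a few checks (a sketch for Debian/Ubuntu defaults):

```shell
# A malformed line or a missing trailing newline can make cron
# silently skip /etc/crontab; the last byte should be \n.
tail -c 1 /etc/crontab | od -c

# cron logs when it (re)reads its tables; look for errors around them.
grep CRON /var/log/syslog | tail

# List what cron.daily would execute, without running anything.
run-parts --test /etc/cron.daily
```

If cron.daily's jobs (including logrotate) never appear in the syslog output, /etc/crontab is likely being ignored.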
This also explains fail2ban's troublesome behavior after a server restart. Since nginx's logs weren't being rotated, fail2ban sometimes attempted to read the entire log from the beginning, which caused a prolonged CPU spike. Log rotation will solve that as well.