Hi guys, today I got one that made me scratch my head, but the solution made me blush because of its simplicity. Here we go, hope it helps.
Genesis: I had been having a problem with one of my RHEL servers, but since it was not a critical server I kept postponing it. The problem was, whenever I tried installing apps, or running any command that touched YUM or RPM, I got this error:
rpmdb: unable to join the environment
error: db3 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages index using db3 - Resource temporarily unavailable (11)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
To cut a long story short, today I finally sat down to solve it. After poking around, I found my /var partition was 100% full.
So easy - I cleared the log files and got the partition back to 80% free space. But behold, the error persisted.
At first I thought it was because one of the files I had purged was still being held open by a service. This I confirmed with:
#lsof /var/
Yep, rsyslog was holding a deleted file. I thought of closing it and restarting, but that failed, so I became adventurous and killed it with #skill rsyslog. Big mistake - on trying to restart it, I got this error:
rsyslog Starting system logger: Can't open or create /var/run/syslogd.pid. Can't write pid.
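The "space not freed after deleting logs" situation above is the classic deleted-but-still-open file: as long as a process holds a file descriptor on the deleted file, the kernel keeps its blocks allocated (lsof lists such files as "(deleted)"). A minimal, self-contained sketch of the behaviour, Linux-specific because it peeks at /proc:

```shell
# Create a file, hold it open on fd 3, then delete the directory entry.
tmp=$(mktemp)
exec 3<"$tmp"
rm -f "$tmp"

# The name is gone, but the open descriptor still pins the inode,
# so the disk space is NOT released yet.
if [ -h "/proc/$$/fd/3" ]; then echo "still held"; fi

# Only closing the descriptor (or restarting the holding service)
# actually frees the space.
exec 3<&-
```

This is why restarting (not killing) the service holding the file is the clean way to reclaim the space.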
I knew things were not what they seemed. Then, out of curiosity, I tried touching a file on the /var partition:
#touch /var/test
Got an error - no space left on the partition. Oops, things were getting hot.
How now, how?
Haa, suspense. After some googling and recalling past issues, I remembered: if you get this problem, check the inode usage. Mine was at 100% for the /var partition:
#df -i
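For context: `df -i` reports inode usage (roughly, the number of files a filesystem can hold) rather than byte usage, so a partition can show plenty of free space in plain `df` yet refuse every new file once the IUse% column hits 100%. A quick check, as a sketch:

```shell
# Show inode usage for the filesystem holding /var.
# The IUse% column is the one to watch; if /var is not a separate
# mount, check the "Mounted on" column to see which filesystem
# df actually reported on.
df -i /var
```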
Then run the command below, drilling down into whichever location shows high usage:
#for i in /var/*; do echo $i; find $i | wc -l; done
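The same hunt can be sketched in a self-contained way against a throwaway temp tree (the directory names below are made up for the demo; on a real box you would point it at /var/*). Sorting the counts puts the biggest inode consumer first:

```shell
# Build a tiny tree: "spool" gets three files, "log" gets one.
base=$(mktemp -d)
mkdir -p "$base/spool" "$base/log"
for n in 1 2 3; do : > "$base/spool/f$n"; done   # 3 files + the dir = 4 inodes
: > "$base/log/one"                              # 1 file  + the dir = 2 inodes

# Count entries under each top-level directory, largest first.
for d in "$base"/*; do
  printf '%s %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn

rm -rf "$base"
```

The first line of output names the directory eating the most inodes; repeat the loop one level deeper inside it until you find the culprit.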
In my case, the issue was loads of empty files in /var/spool/abrt/.
Deleted the files and voilà, things went back to normal.
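Since the culprits were empty files, a safer cleanup than a blanket rm is to let find delete only zero-length regular files, leaving any real crash data alone. A hedged sketch, run against a temp directory standing in for /var/spool/abrt/:

```shell
dir=$(mktemp -d)          # stand-in for /var/spool/abrt/
: > "$dir/empty1"
: > "$dir/empty2"
echo data > "$dir/keep"   # a non-empty file that must survive

# Delete only empty regular files; directories and non-empty
# files are untouched.
find "$dir" -type f -empty -delete
ls "$dir"                 # prints: keep

rm -rf "$dir"
```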
Hope this helps someone out there...
Cheers