I started Evolution this morning and noticed half of my calendar events were missing. Attempts to refresh the calendar resulted in the following error message:
The calendar backend servicing “XXX” encountered an error.
The reported error was “SQLite error code ‘11’: database disk image is malformed (statement:SELECT * FROM ECacheObjects WHERE ECacheState!=0)”.
Just the start I wanted to my Friday morning. Unfortunately, the Evolution documentation didn’t provide any guidance on fixing a corrupted database, and the best advice I could find, aside from deleting and recreating the account in Evolution, was to run an integrity check on the offending database [1].
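For reference, that check is a one-liner with the sqlite3 command-line tool, given the path to the database in question (which, at this point, I didn’t have; the path below is a placeholder):
$ sqlite3 /path/to/cache.db "PRAGMA integrity_check;"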
I knew from this guide that my main Evolution configuration was stored in .local/share/evolution, but a look through the calendar subdirectory yielded nothing. Googling ECacheObjects curiously didn’t bring anything up, but a search on GitHub did identify the offending service, evolution-data-server [2]. Unfortunately, while I was able to find something that looked like the directory containing “private data” [3], I didn’t know what build configuration had been used and couldn’t find the folder locally.
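In hindsight, grepping a local checkout of the source would have been quicker than searching GitHub. Something like the following sketch, assuming a clone from the project’s GNOME GitLab home, would have pinpointed the table:
$ git clone https://gitlab.gnome.org/GNOME/evolution-data-server.git
$ grep -rn "ECacheObjects" evolution-data-server/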
Now, Linux provides a very helpful feature whereby it shows all the files currently marked as open by a given process, exposed under /proc/<pid>/fd. To use this, I first needed to find the process responsible for maintaining the calendar.
$ ps aux | grep evolution
sfinucan 2552 0.0 0.3 1717928 59912 ? SLsl Dec07 0:02 /usr/libexec/evolution-source-registry
sfinucan 2616 0.0 0.2 1242008 50640 ? Ssl Dec07 0:00 /usr/libexec/evolution-calendar-factory
sfinucan 2696 0.0 0.9 2803208 182632 ? SLl Dec07 0:19 /usr/libexec/evolution-calendar-factory-subprocess --factory caldav [...]
sfinucan 2738 0.0 0.2 1255468 48452 ? Sl Dec07 0:00 /usr/libexec/evolution-calendar-factory-subprocess --factory contacts [...]
sfinucan 2754 0.0 0.2 1152708 44496 ? Ssl Dec07 0:00 /usr/libexec/evolution-addressbook-factory
sfinucan 2766 0.0 0.2 1329236 47984 ? Sl Dec07 0:00 /usr/libexec/evolution-calendar-factory-subprocess --factory local [...]
sfinucan 2787 0.0 0.2 1440644 46152 ? Sl Dec07 0:00 /usr/libexec/evolution-addressbook-factory-subprocess --factory local [...]
sfinucan 2925 0.0 0.2 1536780 54080 tty2 Sl+ Dec07 0:00 /usr/libexec/evolution/evolution-alarm-notify
sfinucan 9496 0.0 0.2 1443748 59028 ? SLl 10:27 0:00 /usr/libexec/evolution-addressbook-factory-subprocess --factory google [...]
sfinucan 9503 2.0 1.6 4220880 332836 tty2 SLl+ 10:27 0:32 evolution
sfinucan 10611 0.0 0.0 119728 972 pts/1 S+ 10:55 0:00 grep --color=auto evolution
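As an aside, pgrep could have done this filtering in one step: -f matches against the full command line (by default only the first 15 characters of the process name are matched, which these subprocess names exceed) and -a prints the matching command line:
$ pgrep -af evolution-calendar-factory-subprocess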
The ps output looked promising, and I took the evolution-calendar-factory-subprocess process with the caldav factory to be the most likely candidate. Let’s see what it has open.
$ ls -l /proc/2696/fd | grep '\.db'
lrwx------. 1 sfinucan sfinucan 64 Dec 7 21:39 12 -> /home/sfinucan/.cache/evolution/calendar/fd3d04f3a29f36ce66c87bca8ef0b4d1d0dc3577/cache.db
lrwx------. 1 sfinucan sfinucan 64 Dec 8 10:27 13 -> /home/sfinucan/.cache/evolution/calendar/853c325e65384d811be1d53e0c6d21706d810a5e/cache.db
lrwx------. 1 sfinucan sfinucan 64 Dec 8 10:27 14 -> /home/sfinucan/.cache/evolution/calendar/9ff6cfa62a76324ab004c9c4a09ecec0a96c0956/cache.db
lrwx------. 1 sfinucan sfinucan 64 Dec 8 10:27 15 -> /home/sfinucan/.cache/evolution/calendar/41464062e9943c630c2bb3171b67d4e1a2cf8a93/cache.db
lrwx------. 1 sfinucan sfinucan 64 Dec 8 10:27 16 -> /home/sfinucan/.cache/evolution/calendar/6e9502d1c38772667d06ed809e1012bb0178a62d/cache.db
lrwx------. 1 sfinucan sfinucan 64 Dec 8 10:27 17 -> /home/sfinucan/.cache/evolution/calendar/f22562ff5b1e02106f69e957a7a18513bec94cab/cache.db
lrwx------. 1 sfinucan sfinucan 64 Dec 8 10:27 18 -> /home/sfinucan/.cache/evolution/calendar/6d11aa1cdaf7e1a1c7ff83b464f319b8bf0b8b08/cache.db
lrwx------. 1 sfinucan sfinucan 64 Dec 8 10:27 22 -> /home/sfinucan/.cache/evolution/calendar/f90f25baabe8d65bb2d1d8197dac7a450bcb46e7/cache.db
lrwx------. 1 sfinucan sfinucan 64 Dec 8 10:27 23 -> /home/sfinucan/.cache/evolution/calendar/fd8b197130da0ca054ab698175e0b3dd16e1b52d/cache.db
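For what it’s worth, lsof reports the same information without poking around /proc directly:
$ lsof -p 2696 | grep '\.db'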
Those cache files looked promising. Time to kill the various evolution processes and go fix those databases.
$ sudo pkill evolution
$ sudo pkill -9 'evolution-.*'
$ cd ~/.cache/evolution/calendar
$ for i in $(find . -path "./trash" -prune -o -name "cache.db" -print); do
>   echo "$i";
>   sqlite3 "$i" "pragma integrity_check;";
> done
./41464062e9943c630c2bb3171b67d4e1a2cf8a93/cache.db
ok
./9ff6cfa62a76324ab004c9c4a09ecec0a96c0956/cache.db
ok
./f22562ff5b1e02106f69e957a7a18513bec94cab/cache.db
ok
./f90f25baabe8d65bb2d1d8197dac7a450bcb46e7/cache.db
ok
./fd8b197130da0ca054ab698175e0b3dd16e1b52d/cache.db
ok
./6d11aa1cdaf7e1a1c7ff83b464f319b8bf0b8b08/cache.db
ok
./fd3d04f3a29f36ce66c87bca8ef0b4d1d0dc3577/cache.db
*** in database main ***
On tree page 2935 cell 492: Rowid 3396 out of order
On tree page 2935 cell 491: Rowid 3394 out of order
On tree page 2935 cell 490: Rowid 3392 out of order
On tree page 2935 cell 489: Rowid 3390 out of order
Page 1635: btreeInitPage() returns error code 11
On tree page 2935 cell 487: Rowid 3386 out of order
Page 1634: btreeInitPage() returns error code 11
Page 1762: btreeInitPage() returns error code 11
On tree page 2935 cell 419: Rowid 3289 out of order
Page 1243 is never used
Page 1255 is never used
Page 1263 is never used
row 1934 missing from index IDX_SUMMARY
row 1934 missing from index IDX_COMPLETED
row 1934 missing from index IDX_DUE
row 1934 missing from index IDX_OCCUREND
row 1934 missing from index IDX_OCCURSTART
row 1934 missing from index sqlite_autoindex_ECacheObjects_1
row 1938 missing from index IDX_SUMMARY
row 1938 missing from index IDX_COMPLETED
row 1938 missing from index IDX_DUE
row 1938 missing from index sqlite_autoindex_ECacheObjects_1
row 1939 missing from index IDX_SUMMARY
row 1939 missing from index IDX_COMPLETED
row 1939 missing from index IDX_DUE
row 1941 missing from index IDX_SUMMARY
row 1941 missing from index IDX_COMPLETED
row 1941 missing from index IDX_DUE
row 1941 missing from index sqlite_autoindex_ECacheObjects_1
Error: database disk image is malformed
./853c325e65384d811be1d53e0c6d21706d810a5e/cache.db
ok
./6e9502d1c38772667d06ed809e1012bb0178a62d/cache.db
ok
We have our offending database. Now, we could simply remove this one and be done with it but, to be honest, I don’t really trust the rest of them now. Seeing as everything is already stored in the cloud, I can simply delete these caches.
$ rm -rf ~/.cache/evolution/calendar/*
Problem solved.
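As a closing aside: had these been anything other than caches of cloud-hosted data, deleting them wouldn’t have been an option. In that case, SQLite’s .dump command can usually salvage whatever remains readable from a corrupted file into a fresh database. A rough sketch, untested against this particular breakage:
$ sqlite3 cache.db ".dump" | sqlite3 cache-recovered.db
$ mv cache-recovered.db cache.db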