Revert "journal: do not check for number of files"  (parent: 6c2b9c8)
author     Lennart Poettering <lennart@poettering.net>  Thu, 29 Jan 2015 01:10:15 +0000 (02:10 +0100)
committer  Lennart Poettering <lennart@poettering.net>  Thu, 29 Jan 2015 01:11:55 +0000 (02:11 +0100)
This reverts commit b914ea8d379b446c4c9fac4ba181771676ef38cd.
We really need to put a limit on all our resources, everywhere, and in
particular if we operate on external data.
Hence, let's reintroduce the limit, but bump it substantially, so that
it is guaranteed to be higher than any realistic RLIMIT_NOFILE setting.
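
For background (not part of the commit): the reintroduced cap only has an effect if it sits above the number of file descriptors the process could open anyway. The following is a minimal, standalone C sketch that inspects that limit via getrlimit(RLIMIT_NOFILE); the chosen constant of 7168 is meant to comfortably exceed typical soft (often 1024) and hard (often 4096) defaults.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
            struct rlimit rl;

            /* Query the calling process's limit on open file descriptors. */
            if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
                    perror("getrlimit");
                    return 1;
            }

            /* rlim_cur is the soft limit, rlim_max the hard limit; either may
             * be RLIM_INFINITY. JOURNAL_FILES_MAX (7168) is picked to exceed
             * realistic values of both. */
            printf("RLIMIT_NOFILE soft=%llu hard=%llu\n",
                   (unsigned long long) rl.rlim_cur,
                   (unsigned long long) rl.rlim_max);
            return 0;
    }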
src/journal/sd-journal.c
diff --git a/src/journal/sd-journal.c b/src/journal/sd-journal.c
index 0268675abbd5fcf0bb471ba93a6fb41969cb7e34..9bc426faf80cbcd1d9ae534248b3877804a3d0e8 100644
--- a/src/journal/sd-journal.c
+++ b/src/journal/sd-journal.c
@@ -43,6 +43,8 @@
 #include "replace-var.h"
 #include "fileio.h"
 
+#define JOURNAL_FILES_MAX 7168
+
 #define JOURNAL_FILES_RECHECK_USEC (2 * USEC_PER_SEC)
 
 #define REPLACE_VAR_MAX 256
@@ -1196,6 +1198,11 @@ static int add_any_file(sd_journal *j, const char *path) {
         if (ordered_hashmap_get(j->files, path))
                 return 0;
 
+        if (ordered_hashmap_size(j->files) >= JOURNAL_FILES_MAX) {
+                log_warning("Too many open journal files, not adding %s.", path);
+                return set_put_error(j, -ETOOMANYREFS);
+        }
+
         r = journal_file_open(path, O_RDONLY, 0, false, false, NULL, j->mmap, NULL, &f);
         if (r < 0)
                 return r;