[Pacemaker] pacemaker processes RSS growth
Vladislav Bogdanov
bubble at hoster-ok.com
Tue Dec 11 03:52:05 UTC 2012
11.12.2012 05:12, Andrew Beekhof wrote:
> On Mon, Dec 10, 2012 at 11:34 PM, Vladislav Bogdanov
> <bubble at hoster-ok.com> wrote:
>> 10.12.2012 09:56, Vladislav Bogdanov wrote:
>>> 10.12.2012 04:29, Andrew Beekhof wrote:
>>>> On Fri, Dec 7, 2012 at 5:37 PM, Vladislav Bogdanov <bubble at hoster-ok.com> wrote:
>>>>> 06.12.2012 09:04, Vladislav Bogdanov wrote:
>>>>>> 06.12.2012 06:05, Andrew Beekhof wrote:
>>>>>>> I wonder what the growth looks like with the recent libqb fix.
>>>>>>> That could be an explanation.
>>>>>>
>>>>>> Valid point. I will watch.
>>>>>
>>>>> On an almost static cluster, the only change in memory state over 24
>>>>> hours is +700 kB of shared memory for crmd on the DC. I will keep
>>>>> watching that one for a while longer.
>>>
>>> It still grows, by roughly 650-700 kB per day. I sampled the 'maps' and
>>> 'smaps' content from crmd's /proc entry and will look at what differs
>>> there over time.
>>
>> smaps tells me it may be in /dev/shm/qb-pengine-event-1735-1736-4-data.
>> 1735 is pengine, 1736 is crmd.
>>
>> Diff of that part:
>> @@ -56,13 +56,13 @@
>> MMUPageSize: 4 kB
>> 7f427fddf000-7f42802df000 rw-s 00000000 00:0f 12332
>> /dev/shm/qb-pengine-event-1735-1736-4-data
>> Size: 5120 kB
>> -Rss: 4180 kB
>> -Pss: 2089 kB
>> +Rss: 4320 kB
>> +Pss: 2159 kB
>> Shared_Clean: 0 kB
>> -Shared_Dirty: 4180 kB
>> +Shared_Dirty: 4320 kB
>> Private_Clean: 0 kB
>> Private_Dirty: 0 kB
>> -Referenced: 4180 kB
>> +Referenced: 4320 kB
>> Anonymous: 0 kB
>> AnonHugePages: 0 kB
>> Swap: 0 kB
>>
>> Does that help to understand what is happening?
>
> Not yet, but it's very interesting.
> How did you generate this? I'm not familiar with smaps.
cat /proc/$crmd_pid/smaps > /path/to/file
sleep $time_to_wait
diff -u /path/to/file /proc/$crmd_pid/smaps
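
For example, to watch only the Rss of that qb-pengine-event mapping over
time instead of diffing the whole file, a rough sketch like the following
could be used ($crmd_pid, the log path and the one-hour interval are just
placeholders; the mapping name is the one from the diff above):

# sketch: log a timestamp and the Rss line(s) of the qb-pengine-event
# mapping(s) from crmd's smaps once an hour
while true; do
    date '+%F %T'
    grep -A 2 'qb-pengine-event' /proc/$crmd_pid/smaps | grep '^Rss:'
    sleep 3600
done >> /var/tmp/crmd-qb-rss.log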
Some explanations can be found at, for example,
http://unix.stackexchange.com/questions/33381/getting-information-about-a-process-memory-usage-from-proc-pid-smaps
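
And to double-check which processes the two PIDs encoded in the shm file
name belong to (1735 and 1736 here), something like this should do (file
name pattern as in the diff above):

ls -l /dev/shm/qb-pengine-event-*
ps -o pid=,comm= -p 1735,1736    # should show pengine and crmd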