
Summary: mongod is currently averaging 40%+ CPU, with intermittent spikes to 140%+. I have basically confirmed that all writes and queries use indexes. The mongotop and mongostat output is shown below (sorry, markdown doesn't seem to work here, so the formatting isn't very clear). db.currentOp() does show intermittent operations that are expensive, but those use indexes as well; that output is also provided below. Thanks in advance for any guidance.
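For context, the mongostat samples below are one second apart, and the operation dump further down is a single entry from db.currentOp(). A minimal sketch of the kind of currentOp filter used for this check (the conditions and threshold here are illustrative, not the exact call from this run):

// mongo shell: list operations that are executing right now and have been running a while
db.currentOp({
    "active" : true,                          // only operations currently executing
    "microsecs_running" : { "$gte" : 5000 }   // illustrative threshold: running for >= 5 ms
})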

ns                          total  read write  2022-01-25T16:49:02+08:00
xxx_88.playerActivity         9ms   0ms   9ms
yyy_53.taskItem               9ms   0ms   9ms
yyy_29.mapRiskLand            6ms   0ms   6ms
xxx_137.taskItem              5ms   0ms   5ms
xxx_99.mapRiskLand            5ms   0ms   5ms
xxx_68.playerActivity         4ms   0ms   4ms
xxx_88.taskItem               4ms   0ms   4ms
xxx_89.playerActivity         4ms   0ms   4ms
yyy_29.playerActivity         4ms   0ms   4ms
xxx_106.playerActivity        3ms   0ms   3ms

insert query update delete getmore command % dirty % used flushes vsize   res qr|qw ar|aw netIn netOut conn                      time
    *0    97   1551     68       3     3|0     3.9   80.0       0 15.0G 13.6G   0|0   0|1 1.88m  5.86m  569 2022-01-25T16:53:40+08:00
    *0     8   1385     46       0     2|0     3.9   80.0       0 15.0G 13.6G   0|1   0|0 1.90m   147k  569 2022-01-25T16:53:41+08:00
    *0     5    994     19       0     3|0     3.9   80.0       0 15.0G 13.6G   0|0   0|0 1.54m  5.32m  569 2022-01-25T16:53:42+08:00
    *0    94   2141    161       2     3|0     4.0   80.0       0 15.0G 13.6G   0|1   0|0 2.57m  5.75m  569 2022-01-25T16:53:43+08:00
    *0    13   1254     35       0     8|0     4.0   80.0       0 15.0G 13.6G   0|0   0|0 1.99m  5.39m  569 2022-01-25T16:53:44+08:00
    *0    16   1360     44       0     3|0     4.0   80.0       0 15.0G 13.6G   0|0   0|0 1.89m  5.44m  569 2022-01-25T16:53:45+08:00
    *0     1   1549     93       0     3|0     4.0   80.0       0 15.0G 13.6G   0|0   0|0 2.20m  5.34m  569 2022-01-25T16:53:46+08:00
    *0    *0   1707    155       0     3|0     4.1   80.0       0 15.0G 13.6G   0|0   0|0 1.86m  5.34m  569 2022-01-25T16:53:47+08:00
    *0    *0   1566    148       0     2|0     4.1   80.0       0 15.0G 13.6G   0|1   0|0 2.20m   162k  569 2022-01-25T16:53:48+08:00
    *0   119   1639     97       3     3|0     4.1   80.0       0 15.0G 13.6G   0|0   0|0 1.77m  6.41m  569 2022-01-25T16:53:49+08:00

{
    "desc" : "conn11697",
    "threadId" : "140580703586048",
    "connectionId" : 11697,
    "client" : "10.206.0.15:56982",
    "active" : true,
    "opid" : -730723337,
    "secs_running" : 0,
    "microsecs_running" : NumberLong(7817),
    "op" : "query",
    "ns" : "xxx.heroDerma",
    "query" : {
        "find" : "heroDerma",
        "filter" : {
            "playerId" : 12739719
        }
    },
    "planSummary" : "IXSCAN { playerId: 1 }",
    "numYields" : 0,
    "locks" : {
        "Global" : "r",
        "Database" : "r",
        "Collection" : "r"
    },
    "waitingForLock" : false,
    "lockStats" : {
        "Global" : {
            "acquireCount" : {
                "r" : NumberLong(2)
            }
        },
        "Database" : {
            "acquireCount" : {
                "r" : NumberLong(1)
            }
        },
        "Collection" : {
            "acquireCount" : {
                "r" : NumberLong(1)
            }
        }
    }
}
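The planSummary above already shows IXSCAN { playerId: 1 }. As a sanity check, the same query shape can be explained directly; a minimal sketch, run against the database that owns heroDerma, with the filter value taken from the dump above:

// mongo shell: confirm the winning plan for the query seen in currentOp
db.heroDerma.find({ "playerId" : 12739719 }).explain("executionStats")
// expect an IXSCAN stage on { playerId: 1 } in winningPlan,
// with totalDocsExamined close to nReturned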

Comments:

What was the cause of your problem?

@xiaoxu The dirty and used percentages have gone over 5% and 80% respectively, so the background eviction threads kick in and drive the CPU up. Because the values are only just over the thresholds, eviction brings them back down, and then they climb over again, which is why the spikes are intermittent.
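For anyone who wants to check this on their own instance, the dirty/used percentages that mongostat reports can be recomputed from db.serverStatus(); a minimal sketch, assuming the standard WiredTiger cache counters:

// mongo shell: WiredTiger cache usage, i.e. the 'dirty' and 'used' columns in mongostat
var cache = db.serverStatus().wiredTiger.cache;
var max   = cache["maximum bytes configured"];
var used  = cache["bytes currently in the cache"] * 100 / max;      // mongostat 'used' %
var dirty = cache["tracked dirty bytes in the cache"] * 100 / max;  // mongostat 'dirty' %
print("used %:  " + used.toFixed(1));
print("dirty %: " + dirty.toFixed(1));
// by default, background eviction starts around used > 80% (eviction_target)
// or dirty > 5% (eviction_dirty_target), which matches the behaviour described above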