commit d6629859b36d953a4b1369b749f178736911bf10
tree 154cfc0d8ff3b65f59b9052bcc41edaabf974063
parent 50069a5851323ba5def0e414a21e234345016870
author Doug Ledford <dledford@redhat.com> Thu May 31 16:26:35 2012 -0700
committer Linus Torvalds <torvalds@linux-foundation.org> Thu May 31 17:49:31 2012 -0700
  "message": "ipc/mqueue: improve performance of send/recv\n\nThe existing implementation of the POSIX message queue send and recv\nfunctions is, well, abysmal.  Even worse than abysmal.  I submitted a\npatch to increase the maximum POSIX message queue limit to 65536 due to\ncustomer needs, however, upon looking over the send/recv implementation, I\nrealized that my customer needs help with that too even if they don\u0027t know\nit.  The basic problem is that, given the fairly typical use case scenario\nfor a large queue of queueing lots of messages all at the same priority (I\nverified with my customer that this is indeed what their app does), the\nmsg_insert routine is basically a frikkin\u0027 bubble sort.  I mean, whoa,\nthat\u0027s *so* middle school.\n\nOK, OK, to not slam the original author too much, I\u0027m sure they didn\u0027t\nenvision a queue depth of 50,000+ messages.  No one would think that\nmoving elements in an array, one at a time, and dereferencing each pointer\nin that array to check priority of the message being pointed too, again\none at a time, for 50,000+ times would be good.  So let\u0027s assume that, as\nis typical, the users have found a way to break our code simply by using\nit in a way we didn\u0027t envision.  Fair enough.\n\n\"So, just how broken is it?\", you ask.  I wondered the same thing, so I\nwrote an app to let me know.  It\u0027s my next patch.  It gave me some\ninteresting results.  Here\u0027s what it tested:\n\nInterference with other apps - In continuous mode, the app just sits there\nand hits a message queue forever, while you go do something productive on\nanother terminal using other CPUs.  You then measure how long it takes you\nto do that something productive.  Then you restart the app in fake\ncontinuous mode, and it sits in a tight loop on a CPU while you repeat\nyour tests.  The whole point of this is to keep one CPU tied up (so it\ncan\u0027t be used in your other work) but in one case tied up hitting the\nmqueue code so we can see the effect of walking that 65,528 element array\none pointer at a time on the global CPU cache.  If it\u0027s bad, then it will\nslow down your app on the other CPUs just by polluting cache mercilessly.\nIn the fake case, it will be in a tight loop, but not polluting cache.\nTesting the mqueue subsystem directly - Here we just run a number of tests\nto see how the mqueue subsystem performs under different conditions.  

"So, just how broken is it?", you ask.  I wondered the same thing, so I
wrote an app to let me know.  It's my next patch.  It gave me some
interesting results.  Here's what it tested:

Interference with other apps - In continuous mode, the app just sits
there and hits a message queue forever, while you go do something
productive on another terminal using other CPUs.  You then measure how
long it takes you to do that something productive.  Then you restart the
app in fake continuous mode, and it sits in a tight loop on a CPU while
you repeat your tests.  The whole point of this is to keep one CPU tied
up (so it can't be used in your other work), but in one case tied up
hitting the mqueue code, so we can see the effect of walking that
65,528-element array one pointer at a time on the global CPU cache.  If
it's bad, then it will slow down your app on the other CPUs just by
polluting the cache mercilessly.  In the fake case, it sits in a tight
loop but does not pollute the cache.

Testing the mqueue subsystem directly - Here we just run a number of
tests to see how the mqueue subsystem performs under different
conditions.  A couple of conditions are known to be worst case for the
old system (and for some of its routines), so this tests all of them.
(A minimal sketch of this kind of timing loop appears after the diff
below.)

So, on to the results already:

Subsystem/Test                      Old                   New

Time to compile linux kernel
(make -j12 on a 6-core CPU)
  Running mqueue test      user 49m10.744s       user 45m26.294s
                            sys  5m51.924s        sys  4m59.894s
                          total 55m02.668s      total 50m26.188s

  Running fake test        user 45m32.686s       user 45m18.552s
                            sys  5m12.465s        sys  4m56.468s
                          total 50m45.151s      total 50m15.020s

  % slowdown from mqueue
    cache thrashing              ~8%                   ~.5%

Avg time to send/recv (in nanoseconds per message)
  when queue empty             305/288               349/318
  when queue full (65528 messages)
    constant priority       526589/823               362/314
    increasing priority     403105/916               495/445
    decreasing priority      73420/594               482/409
    random priority         280147/920               546/436

Time to fill/drain queue (65528 messages, in seconds)
  constant priority          17.37/.12               .13/.12
  increasing priority         4.14/.14               .21/.18
  decreasing priority        12.93/.13               .21/.18
  random priority             8.88/.16               .22/.17

So, I think the results speak for themselves.  It's possible this
implementation could be improved by caching at least one priority level
in the node tree (that would bring the queue-empty performance more in
line with the old implementation), but this works and is *so* much
better than what we had, especially for the common case of a single
priority in use, that further refinements can come in follow-on patches.

[akpm@linux-foundation.org: fix typo in comment, remove stray semicolon]
[levinsasha928@gmail.com: use correct gfp flags in msg_insert]
Signed-off-by: Doug Ledford <dledford@redhat.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Manfred Spraul <manfred@colorfullife.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
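
(For reference, a userspace sketch of the per-priority node tree the
new implementation is built around.  This is illustrative only: the
names are made up, and an unbalanced binary search tree stands in for
the kernel's red-black tree.)

#include <stdlib.h>

struct msg {
	struct msg *next;		/* FIFO link within one priority */
	/* payload omitted */
};

struct prio_node {
	struct prio_node *left, *right;	/* BST keyed by priority */
	unsigned int prio;
	struct msg *head, *tail;	/* FIFO list for this priority */
};

/*
 * Sketch of the new insert: find (or create) the tree node for this
 * priority and append the message to its FIFO list.  Cost now scales
 * with the number of distinct priorities in use, not with the number
 * of queued messages, so the common single-priority case stays cheap
 * no matter how deep the queue gets.
 */
static int msg_insert_tree(struct prio_node **root, unsigned int prio,
			   struct msg *m)
{
	struct prio_node **p = root;

	while (*p) {
		if (prio < (*p)->prio)
			p = &(*p)->left;
		else if (prio > (*p)->prio)
			p = &(*p)->right;
		else
			break;		/* priority level already present */
	}
	if (!*p) {
		*p = calloc(1, sizeof(**p));
		if (!*p)
			return -1;
		(*p)->prio = prio;
	}
	m->next = NULL;
	if ((*p)->tail)
		(*p)->tail->next = m;
	else
		(*p)->head = m;
	(*p)->tail = m;
	return 0;
}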
  "tree_diff": [
    {
      "type": "modify",
      "old_id": "609d53f7a915f339f9934e1725e02c80b9df433e",
      "old_mode": 33188,
      "old_path": "ipc/mqueue.c",
      "new_id": "3c72a05dc32644079d6f20d17617b2fa19a32496",
      "new_mode": 33188,
      "new_path": "ipc/mqueue.c"
    }
  ]
}
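
(As promised above: the author's actual test app lands in the next
patch, so this is only an assumed, minimal illustration of a send/recv
timing loop against the standard POSIX mq API.  The queue name, message
size, and iteration count are made up; build with -lrt.)

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <time.h>

#define QNAME	"/mq_perf_sketch"	/* hypothetical queue name */
#define NMSGS	1000000L
#define MSGSIZE	64

int main(void)
{
	struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = MSGSIZE };
	char buf[MSGSIZE] = { 0 };
	struct timespec t0, t1;
	mqd_t q;
	long i;

	q = mq_open(QNAME, O_RDWR | O_CREAT, 0600, &attr);
	if (q == (mqd_t)-1) {
		perror("mq_open");
		return 1;
	}

	/*
	 * Time the "queue empty" case: each send is immediately
	 * followed by a recv, all at constant (zero) priority.
	 */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NMSGS; i++) {
		mq_send(q, buf, MSGSIZE, 0);
		mq_receive(q, buf, MSGSIZE, NULL);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("avg ns per send+recv pair: %ld\n",
	       ((t1.tv_sec - t0.tv_sec) * 1000000000L +
		(t1.tv_nsec - t0.tv_nsec)) / NMSGS);

	mq_close(q);
	mq_unlink(QNAME);
	return 0;
}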
