jobqueue: Use explicit retry when refreshLinks can't get a lock
author    Timo Tijhof <krinklemail@gmail.com>
          Tue, 28 Aug 2018 21:14:25 +0000 (22:14 +0100)
committer Timo Tijhof <krinklemail@gmail.com>
          Tue, 28 Aug 2018 21:39:01 +0000 (22:39 +0100)
commit    3d758d74956fb52cff50479e3f0f6545850df49d
tree      a0d624e997162471a9c1e45a65c53ed29e881a27
parent    9686c83554446d47f3157a3de180ee3e08b5f75a

While RefreshLinksJob is de-duplicated by page ID, two jobs can
still run for the same page if the second one was queued after the
first one started running. In that case the newer job must not be
skipped or ignored, because it has newer information to record to
the database; but it also has no way to stop the old one, and the
two cannot run concurrently.
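
For illustration, a minimal sketch of the conflict, assuming a
per-page lock helper along the lines of LinksUpdate::acquirePageLock()
(variable names here are hypothetical):

    // Sketch only: the newer job tries to take the per-page lock
    // that the older, still-running job holds. Acquisition fails
    // rather than blocking, so the newer job must run again later.
    $scopedLock = LinksUpdate::acquirePageLock( $dbw, $pageId, 'job' );
    if ( !$scopedLock ) {
        // The older job still holds the lock. This job carries
        // newer data, so it cannot simply be dropped here.
    }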

Instead of letting the lock exception mark the job as failed, which
triggers an implicit retry, signal the retry explicitly. This avoids
logspam from the uncaught exception.
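
A hedged sketch of the resulting pattern inside
RefreshLinksJob::runForTitle(); the error message wording is
illustrative, not the exact string used:

    // On lock failure, record a soft error and return false so the
    // job queue retries the job later, instead of throwing an
    // exception that would be logged as an error.
    if ( !$scopedLock ) {
        $this->setLastError(
            'LinksUpdate already running for this page, try again later.' );
        return false; // marks this run as failed; the queue retries
    }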

Bug: T170596
Co-Authored-By: Aaron Schulz <aschulz@wikimedia.org>
Change-Id: Id2852d73d00daf83f72cf5ff778c638083f5fc73
includes/deferred/LinksDeletionUpdate.php
includes/deferred/LinksUpdate.php
includes/jobqueue/jobs/DeleteLinksJob.php
includes/jobqueue/jobs/RefreshLinksJob.php