diff --git a/includes/job/README b/includes/job/README
index 57c92e9270..c11d5a78e0 100644
--- a/includes/job/README
+++ b/includes/job/README
@@ -7,7 +7,6 @@ Notes on the Job queuing system architecture.
 \section intro Introduction
 The data model consists of the following main components:
-
 * The Job object represents a particular deferred task that happens in the
 background. All jobs subclass the Job object and put the main logic in the
 function called run().
@@ -15,6 +14,8 @@ The data model consists of the following main components:
 For example there may be a queue for email jobs and a queue for squid purge
 jobs.
+\section jobqueue Job queues
+
 Each job type has its own queue and is associated with a storage medium. One
 queue might save its jobs in redis while another one would use a database.
@@ -27,6 +28,7 @@ The factory class JobQueueGroup provides helper functions:
 The following queue classes are available:
 * JobQueueDB (stores jobs in the `job` table in a database)
+* JobQueueRedis (stores jobs in a redis server)
 All queue classes support some basic operations (though some may be no-ops):
 * enqueueing a batch of jobs
@@ -46,6 +48,27 @@ dequeued by a job runner, which crashes before completion, the job will be
 lost. Some jobs, like purging squid caches after a template change, may not
 require durable queues, whereas other jobs might be more important.
+\section aggregator Job queue aggregator
+
+The aggregators are used by nextJobDB.php, a script that returns a
+random ready queue (on any wiki in the farm) that can be used with runJobs.php.
+This can be used in conjunction with any scripts that handle wiki farm job queues.
+Note that $wgLocalDatabases defines which wikis are in the wiki farm.
+
+Since each job type has its own queue, and wiki farms may have many wikis,
+there might be a large number of queues to keep track of. To avoid wasting
+large amounts of time polling empty queues, aggregators exist to keep track
+of which queues are ready.
+
+The following queue aggregator classes are available:
+* JobQueueAggregatorMemc (uses $wgMemc to track ready queues)
+* JobQueueAggregatorRedis (uses a redis server to track ready queues)
+
+Some aggregators cache data for a few minutes while others may always be up to date.
+This can be an important factor for jobs that need a low pickup time (or latency).
+
+\section jobs Jobs
+
 Callers should also try to make jobs maintain correctness when executed twice.
 This is useful for queues that actually implement ack(), since they may recycle
 dequeued but un-acknowledged jobs back into the queue to be attempted again. If
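
For context beyond the diff above, the pattern the README describes (subclass
Job, put the work in run(), enqueue through JobQueueGroup) looks roughly like
the sketch below in MediaWiki PHP. The job type name "exampleJob", the
ExampleJob class, and its parameters are hypothetical; only the Job base
class, JobQueueGroup::singleton(), and push() come from the README's data
model.

```php
<?php
// Sketch only: assumes it runs inside MediaWiki, where the Job base
// class and JobQueueGroup are defined.

class ExampleJob extends Job {
	public function __construct( $title, $params ) {
		// 'exampleJob' is the job type; each type gets its own queue.
		parent::__construct( 'exampleJob', $title, $params );
	}

	/**
	 * Executed later by a job runner (e.g. via runJobs.php).
	 * Should stay correct if executed twice, since queues that
	 * implement ack() may recycle un-acknowledged jobs.
	 * @return bool success
	 */
	public function run() {
		// ... do the actual deferred work here ...
		return true;
	}
}

// Enqueueing from request-handling code:
$job = new ExampleJob( $title, array( 'key' => 'value' ) );
JobQueueGroup::singleton()->push( $job );
```

Because push() only records the job, the expensive work happens later in a
background runner rather than during the web request.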