Initial import from tarball

This commit is contained in:
Jérôme Schneider 2014-11-01 19:45:25 +01:00
commit b618a5d7ed
71 changed files with 20404 additions and 0 deletions

673
CHANGES.txt Normal file

@@ -0,0 +1,673 @@
3.3.0.18 - 2014-06-20
---------------------
- Now compiles on GNU/kFreeBSD
Contributed by Michael Fladischer.
- Pool: `AF_PIPE` address fixed so that it works on recent Windows versions
in combination with Python 2.7.7.
Fix contributed by Joshua Tacoma.
- Pool: Fix for `Supervisor object has no attribute _children` error.
Fix contributed by Andres Riancho.
- Pool: Fixed bug with human_status(None).
- Pool: shrink did not work properly if asked to remove more than 1 process.
3.3.0.17 - 2014-04-16
---------------------
- Fixes SemLock on Python 3.4 (Issue #107) when using
``forking_enable(False)``.
- Pool: Include more useful exitcode information when processes exit.
3.3.0.16 - 2014-02-11
---------------------
- Previous release was missing the billiard.py3 package from MANIFEST
so the installation would not work on Python 3.
3.3.0.15 - 2014-02-10
---------------------
- Pool: Fixed "cannot join process not started" error.
- Now uses billiard.py2 and billiard.py3 specific packages that are installed
depending on the Python version used.
This way the installation will not import version specific modules (and
possibly crash).
3.3.0.14 - 2014-01-17
---------------------
- Fixed problem with our backwards compatible ``bytes`` wrapper
(Issue #103).
- No longer expects frozen applications to have a valid ``__file__``
attribute.
Fix contributed by George Sibble.
3.3.0.13 - 2013-12-13
---------------------
- Fixes compatibility with Python < 2.7.6
- No longer attempts to handle ``SIGBUS``
Contributed by Vishal Vatsa.
- Non-thread based pool now only handles signals:
``SIGHUP``, ``SIGQUIT``, ``SIGTERM``, ``SIGUSR1``,
``SIGUSR2``.
- setup.py: Only show compilation warning for build related commands.
3.3.0.12 - 2013-12-09
---------------------
- Fixed installation for Python 3.
Contributed by Rickert Mulder.
- Pool: Fixed bug with maxtasksperchild.
Fix contributed by Ionel Cristian Maries.
- Pool: Fixed bug in maintain_pool.
3.3.0.11 - 2013-12-03
---------------------
- Fixed Unicode error when installing the distribution (Issue #89).
- Daemonic processes are now allowed to have children (see the sketch below).
But note that it will not be possible to automatically
terminate them when the process exits.
See discussion at https://github.com/celery/celery/issues/1709
- Pool: Would not always be able to detect that a process exited.
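A minimal sketch of the daemonic-children change above, using the
``billiard.Process`` API (stock ``multiprocessing`` raises an error when a
daemonic process tries to start children)::

    from billiard import Process

    def grandchild():
        print('grandchild ran')

    def daemon_task():
        # Allowed from this release on; note the grandchild will not be
        # terminated automatically when the daemonic parent exits.
        p = Process(target=grandchild)
        p.start()
        p.join()

    if __name__ == '__main__':
        d = Process(target=daemon_task)
        d.daemon = True
        d.start()
        d.join()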
3.3.0.10 - 2013-12-02
---------------------
- Windows: Fixed problem with missing ``WAITABANDONED_0``
Fix contributed by Matthias Wagner
- Windows: PipeConnection can now be inherited.
Fix contributed by Matthias Wagner
3.3.0.9 - 2013-12-02
--------------------
- Temporary workaround for Celery maxtasksperchild issue.
Fix contributed by Ionel Cristian Maries.
3.3.0.8 - 2013-11-21
--------------------
- Now also sets ``multiprocessing.current_process`` for compatibility
with logging's ``processName`` field.
3.3.0.7 - 2013-11-15
--------------------
- Fixed compatibility with PyPy 2.1 + 2.2.
- Fixed problem in PyPy detection.
Fix contributed by Tin Tvrtkovic.
- Now uses ``ctypes.find_library`` instead of hardcoded path to find
the OS X CoreServices framework.
Fix contributed by Moritz Kassner.
3.3.0.6 - 2013-11-12
--------------------
- Now works without C extension again.
- New ``_billiard.read(fd, buffer, [len, ])`` function
implements os.read with buffer support (new buffer API); see the sketch below.
- New pure-python implementation of ``Connection.send_offset``.
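A hedged sketch of the read helper (assumes the C extension was built; only
the signature above is documented, the rest is illustrative)::

    import os
    import _billiard  # only present when the C extension compiled

    r, w = os.pipe()
    os.write(w, b'hello')
    buf = bytearray(64)
    n = _billiard.read(r, buf, 5)  # read at most 5 bytes into buf
    assert bytes(buf[:n]) == b'hello'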
3.3.0.5 - 2013-11-11
--------------------
- All platforms except for Windows/PyPy/Jython now require the C extension.
3.3.0.4 - 2013-11-11
--------------------
- Fixed problem with Python 3 and setblocking.
3.3.0.3 - 2013-11-09
--------------------
- Now works on Windows again.
3.3.0.2 - 2013-11-08
--------------------
- ApplyResult.terminate() may be set to signify that the job
must not be executed. It can be used in combination with
Pool.terminate_job.
- Pipe/_SimpleQueue: Now supports rnonblock/wnonblock arguments
to set the read or write end of the pipe to be nonblocking (see the sketch below).
- Pool: Log message included exception info but exception happened
in another process so the resulting traceback was wrong.
- Pool: Worker process can now prepare results before they are sent
back to the main process (using ``Worker.prepare_result``).
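A short sketch of the nonblocking-pipe flags from the entry above (only the
keyword names are documented; the rest is illustrative)::

    from billiard import Pipe

    r, w = Pipe(duplex=False, rnonblock=True)  # nonblocking read end
    w.send('ping')
    if r.poll(1):       # wait up to a second so recv() has data
        print(r.recv())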
3.3.0.1 - 2013-11-04
--------------------
- Pool: New ``correlation_id`` argument to ``apply_async`` can be
used to set a related id for the ``ApplyResult`` object returned:
>>> r = pool.apply_async(target, args, kwargs, correlation_id='foo')
>>> r.correlation_id
'foo'
- Pool: New callback `on_process_exit` is called when a pool
process exits, with signature ``(pid, exitcode)`` (see the sketch below).
Contributed by Daniel M. Taub.
- Pool: Improved the too many restarts detection.
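A sketch of the ``on_process_exit`` hook; only the ``(pid, exitcode)``
signature is documented here, so passing it as a ``Pool`` keyword argument
is an assumption::

    from billiard import Pool

    def log_exit(pid, exitcode):
        print('pool process %d exited with exitcode %r' % (pid, exitcode))

    if __name__ == '__main__':
        pool = Pool(2, on_process_exit=log_exit)  # assumed registration
        pool.close()
        pool.join()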
3.3.0.0 - 2013-10-14
--------------------
- Dual code base now runs on Python 2.6+ and Python 3.
- No longer compatible with Python 2.5
- Includes many changes from multiprocessing in 3.4.
- Now uses ``time.monotonic`` when available, also including
fallback implementations for Linux and OS X.
- No longer cleans up after receiving SIGILL, SIGSEGV or SIGFPE
Contributed by Kevin Blackham
- ``Finalize`` and ``register_after_fork`` are now aliases to multiprocessing.
It's better to import these from multiprocessing directly now
so that there aren't multiple registries.
- New `billiard.queues._SimpleQueue` that does not use semaphores.
- Pool: Can now be extended to support using multiple IPC queues.
- Pool: Can now use async I/O to write to pool IPC queues.
- Pool: New ``Worker.on_loop_stop`` handler can be used to add actions
at pool worker process shutdown.
Note that, like all finalization handlers, there is no guarantee that
this will be executed.
Contributed by dmtaub.
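A sketch of hooking ``Worker.on_loop_stop``. That the worker class can be
swapped via a ``Worker`` attribute on a ``Pool`` subclass is an assumption;
only the handler itself is documented above::

    from billiard.pool import Pool, Worker

    class CleanupWorker(Worker):

        def on_loop_stop(self, *args, **kwargs):
            # Best-effort cleanup: like all finalization handlers,
            # this is not guaranteed to run.
            print('worker %r shutting down' % (self.pid,))

    class MyPool(Pool):
        Worker = CleanupWorker  # assumed extension point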
2.7.3.30 - 2013-06-28
---------------------
- Fixed ImportError in billiard._ext
2.7.3.29 - 2013-06-28
---------------------
- Compilation: Fixed improper handling of HAVE_SEM_OPEN (Issue #55)
Fix contributed by Krzysztof Jagiello.
- Process now releases logging locks after fork.
This previously happened in Pool, but it was done too late
as processes log while they bootstrap.
- Pool.terminate_job now ignores `No such process` errors.
- billiard.Pool entrypoint did not support new arguments
to billiard.pool.Pool
- Connection inbound buffer size increased from 1kb to 128kb.
- C extension cleaned up by properly adding a namespace to symbols.
- _exit_function now works even if thread wakes up after gc collect.
2.7.3.28 - 2013-04-16
---------------------
- Pool: Fixed regression that disabled the deadlock
fix in 2.7.3.24
- Pool: RestartFreqExceeded could be raised prematurely.
- Process: Include pid in startup and process INFO logs.
2.7.3.27 - 2013-04-12
---------------------
- Manager now works again.
- Python 3 fixes for billiard.connection.
- Fixed invalid argument bug when running on Python 3.3
Fix contributed by Nathan Wan.
- Ignore OSError when setting up signal handlers.
2.7.3.26 - 2013-04-09
---------------------
- Pool: Child processes must ignore SIGINT.
2.7.3.25 - 2013-04-09
---------------------
- Pool: 2.7.3.24 broke support for subprocesses (Issue #48).
Signals that should be ignored were instead handled
by terminating.
2.7.3.24 - 2013-04-08
---------------------
- Pool: Make sure finally blocks are called when process exits
due to a signal.
This fixes a deadlock problem when the process is killed
while holding the shared semaphore. However, this solution
does not protect against the processes being killed; a more elaborate
solution is required for that. Hopefully this will be fixed soon in a
later version.
- Pool: Can now use GDB to debug pool child processes.
- Fixes Python 3 compatibility problems.
Contributed by Albertas Agejevas.
2.7.3.23 - 2013-03-22
---------------------
- Windows: Now catches SystemExit from setuptools while trying to build
the C extension (Issue #41).
2.7.3.22 - 2013-03-08
---------------------
- Pool: apply_async now supports a ``callbacks_propagate`` keyword
argument that can be a tuple of exceptions to propagate in callbacks
(callback, errback, accept_callback, timeout_callback); see the sketch
at the end of this entry.
- Errors are no longer logged for OK and recycle exit codes.
Previously a process recycled normally by maxtasksperchild
would log an error.
- Fixed Python 2.5 compatibility problem (Issue #33).
- FreeBSD: Compilation now disables semaphores if Python was built
without it (Issue #40).
Contributed by William Grzybowski
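A sketch of the ``callbacks_propagate`` keyword from the first item above;
exception types listed there are re-raised from the callback instead of
being handled internally::

    from billiard import Pool

    def work(x):
        return x * 2

    def on_done(result):
        print('got %r' % (result,))

    if __name__ == '__main__':
        pool = Pool(2)
        res = pool.apply_async(
            work, (21,), callback=on_done,
            callbacks_propagate=(MemoryError, KeyboardInterrupt),
        )
        res.get()
        pool.close()
        pool.join()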
2.7.3.21 - 2013-02-11
---------------------
- Fixed typo EX_REUSE -> EX_RECYCLE
- Code now conforms to new pep8.py rules.
2.7.3.20 - 2013-02-08
---------------------
- Pool: Disable restart limit if maxR is not set.
- Pool: Now uses os.kill instead of signal.signal.
Contributed by Lukasz Langa
- Fixed name error in process.py
- Pool: ApplyResult.get now properly raises exceptions.
Fix contributed by xentac.
2.7.3.19 - 2012-11-30
---------------------
- Fixes problem at shutdown when gc has collected symbols.
- Pool now always uses _kill for Py2.5 compatibility on Windows (Issue #32).
- Fixes Python 3 compatibility issues
2.7.3.18 - 2012-11-05
---------------------
- Pool: Fix for check_timeouts if not set.
Fix contributed by Dmitry Sukhov
- Fixed pickle problem with Traceback.
Code.frame.__loader__ is now ignored as it may be set to
an unpickleable object.
- The Django old-layout warning was always showing.
2.7.3.17 - 2012-09-26
---------------------
- Fixes typo
2.7.3.16 - 2012-09-26
---------------------
- Windows: Fixes for SemLock._rebuild (Issue #24).
- Pool: Job terminated with terminate_job now raises
billiard.exceptions.Terminated.
2.7.3.15 - 2012-09-21
---------------------
- Windows: Fixes unpickling of SemLock when using fallback.
- Windows: Fixes installation when no C compiler is available.
2.7.3.14 - 2012-09-20
---------------------
- Installation now works again for Python 3.
2.7.3.13 - 2012-09-14
---------------------
- Merged with Python trunk (many authors, many fixes: see Python changelog in
trunk).
- Using execv now also works with older Django projects using setup_environ
(Issue #10).
- Billiard now installs with a warning if the C extension could not be built
because a compiler is not installed or the build fails in some other way.
It really is recommended to have the C extension installed when running
with force execv, but this change also makes it easier to install.
- Pool: Hard timeouts now send KILL shortly after TERM so that C extensions
cannot block the signal.
Python signal handlers are called in the interpreter, so they cannot
be called while a C extension is blocking the interpreter from running.
- Now uses a timeout value for Thread.join that doesn't exceed the maximum
on some platforms.
- Fixed bug in the SemLock fallback used when C extensions not installed.
Fix contributed by Mher Movsisyan.
- Pool: Now sets a Process.index attribute for every process in the pool.
This number will always be between 0 and concurrency-1, and
can be used to e.g. create a logfile for each process in the pool
without creating a new logfile whenever a process is replaced.
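A sketch of the per-slot logfile idea from this entry; the initializer
approach is illustrative, only the ``Process.index`` attribute itself is
documented::

    import logging
    from billiard import Pool, current_process

    def init_logging():
        # index is always in 0..concurrency-1 and survives worker
        # replacement, so each pool slot reuses one logfile.
        idx = current_process().index
        logging.basicConfig(filename='pool-worker-%d.log' % idx)

    if __name__ == '__main__':
        pool = Pool(4, initializer=init_logging)
        pool.close()
        pool.join()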
2.7.3.12 - 2012-08-05
---------------------
- Fixed Python 2.5 compatibility issue.
- New Pool.terminate_job(pid) to terminate a job without raising WorkerLostError
2.7.3.11 - 2012-08-01
---------------------
- Adds support for FreeBSD 7+
Fix contributed by koobs.
- Pool: New argument ``allow_restart`` is now required to enable
the pool process sentinel that is required to restart the pool.
It's disabled by default, which reduces the number of file
descriptors/semaphores required to run the pool.
- Pool: Now emits a warning if a worker process exited with an error
exit code, except for code 155, which is now returned when the worker
process was recycled (maxtasksperchild).
- Python 3 compatibility fixes.
- Python 2.5 compatibility fixes.
2.7.3.10 - 2012-06-26
---------------------
- The ``TimeLimitExceeded`` exception string representation
only included the seconds as a number; it now gives a more
human-friendly description.
- Fixed typo in ``LaxBoundedSemaphore.shrink``.
- Pool: ``ResultHandler.handle_event`` no longer requires
any arguments.
- setup.py bdist now works
2.7.3.9 - 2012-06-03
--------------------
- The ``MP_MAIN_FILE`` environment variable is now set to
the path of the ``__main__`` module when execv is enabled.
- Pool: Errors occurring in the TaskHandler are now reported.
2.7.3.8 - 2012-06-01
--------------------
- Can now be installed on Py 3.2
- Issue #12091: simplify ApplyResult and MapResult with threading.Event
Patch by Charles-Francois Natali
- Pool: Support running without TimeoutHandler thread.
- The with_*_thread arguments have also been replaced with
a single `threads=True` argument.
- Two new pool callbacks:
- ``on_timeout_set(job, soft, hard)``
Applied when a task is executed with a timeout.
- ``on_timeout_cancel(job)``
Applied when a timeout is cancelled (the job completed)
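A sketch of the two callbacks with the documented signatures; how they are
attached to the pool is not spelled out in this entry, so the constructor
keywords are an assumption::

    from billiard import Pool

    def on_timeout_set(job, soft, hard):
        print('job %r: soft=%r hard=%r timeout set' % (job, soft, hard))

    def on_timeout_cancel(job):
        print('job %r: timeout cancelled (job completed)' % (job,))

    if __name__ == '__main__':
        # Assumed registration mechanism, see above.
        pool = Pool(2, on_timeout_set=on_timeout_set,
                    on_timeout_cancel=on_timeout_cancel)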
2.7.3.7 - 2012-05-21
--------------------
- Fixes Python 2.5 support.
2.7.3.6 - 2012-05-21
--------------------
- Pool: Can now be used in an event loop, without starting the supporting
threads (TimeoutHandler still not supported).
To facilitate this the pool has gained the following keyword arguments
(a combined sketch follows at the end of this entry):
- ``with_task_thread``
- ``with_result_thread``
- ``with_supervisor_thread``
- ``on_process_up``
Callback called with Process instance as argument
whenever a new worker process is added.
Used to add new process fds to the eventloop::
def on_process_up(proc):
hub.add_reader(proc.sentinel, pool.maintain_pool)
- ``on_process_down``
Callback called with Process instance as argument
whenever a new worker process is found dead.
Used to remove process fds from the eventloop::
def on_process_down(proc):
hub.remove(proc.sentinel)
- ``semaphore``
Sets the semaphore used to protect from adding new items to the
pool when no processes are available. The default is a threaded
one, so this can be used to change to an async semaphore.
And the following attributes::
- ``readers``
A map of ``fd`` -> ``callback``, to be registered in an eventloop.
Currently this is only the result outqueue with a callback
that processes all currently incoming results.
And the following methods::
- ``did_start_ok``
To be called after starting the pool, and after setting up the
eventloop with the pool fds, to ensure that the worker processes
didn't exit immediately because of an error (internal/memory).
- ``maintain_pool``
Public version of ``_maintain_pool`` that handles max restarts.
- Pool: The too-frequent-restart protection now only counts processes
that had a non-successful exit code.
This is to take the maxtasksperchild option into account, allowing
processes to exit cleanly on their own.
- Pool: New options max_restart + max_restart_freq
This means that the supervisor can't restart processes
faster than max_restart times per max_restart_freq seconds
(like the Erlang supervisor maxR & maxT settings).
The pool is closed and joined if the max restart
frequency is exceeded, where previously it would keep restarting
at an unlimited rate, possibly crashing the system.
The current default value is to stop if it exceeds
100 * process_count restarts in 1 second. This may change later.
It will only count processes with an unsuccessful exit code,
this is to take into account the ``maxtasksperchild`` setting
and code that voluntarily exits.
- Pool: The ``WorkerLostError`` message now includes the exit-code of the
process that disappeared.
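Putting the 2.7.3.6 pieces together, a hedged sketch of driving the pool
from an external event loop. ``Hub`` is a stand-in for a real loop, and
note the ``with_*_thread`` keywords were later folded into a single
``threads`` argument (see 2.7.3.8 above)::

    from billiard.pool import Pool

    class Hub(object):
        """Stand-in event loop: tracks fd -> callback registrations."""

        def __init__(self):
            self.fdmap = {}

        def add_reader(self, fd, callback, *args):
            self.fdmap[fd] = (callback, args)

        def remove(self, fd):
            self.fdmap.pop(fd, None)

    if __name__ == '__main__':
        hub = Hub()

        def on_process_up(proc):
            hub.add_reader(proc.sentinel, pool.maintain_pool)

        def on_process_down(proc):
            hub.remove(proc.sentinel)

        pool = Pool(
            2,
            with_task_thread=False,
            with_result_thread=False,
            with_supervisor_thread=False,
            on_process_up=on_process_up,
            on_process_down=on_process_down,
        )
        for fd, callback in pool.readers.items():
            hub.add_reader(fd, callback)
        assert pool.did_start_ok()  # workers did not exit at once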
2.7.3.5 - 2012-05-09
--------------------
- Now always cleans up after ``sys.exc_info()`` to avoid
cyclic references.
- ExceptionInfo without arguments now defaults to ``sys.exc_info``.
- Forking can now be disabled using the
``MULTIPROCESSING_FORKING_DISABLE`` environment variable
(see the sketch at the end of this entry).
Also this envvar is set so that the behavior is inherited
after execv.
- The semaphore cleanup process started when execv is used
now sets a useful process name if the ``setproctitle``
module is installed.
- Sets the ``FORKED_BY_MULTIPROCESSING``
environment variable if forking is disabled.
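A sketch of the environment switch described above (the variable name is as
documented; treating any non-empty value as "disabled" is an assumption)::

    import os

    # Must be set before billiard decides how to create processes.
    os.environ['MULTIPROCESSING_FORKING_DISABLE'] = '1'

    import billiard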
2.7.3.4 - 2012-04-27
--------------------
- Added `billiard.ensure_multiprocessing()`
Raises NotImplementedError if the platform does not support
multiprocessing (e.g. Jython).
2.7.3.3 - 2012-04-23
--------------------
- PyPy now falls back to using its internal _multiprocessing module,
so everything works except for forking_enable(False) (which
silently degrades).
- Fixed Python 2.5 compat. issues.
- Uses more with statements
- Merged some of the changes from the Python 3 branch.
2.7.3.2 - 2012-04-20
--------------------
- Now installs on PyPy/Jython (but does not work).
2.7.3.1 - 2012-04-20
--------------------
- Python 2.5 support added.
2.7.3.0 - 2012-04-20
--------------------
- Updated from Python 2.7.3
- Python 2.4 support removed, now only supports 2.5, 2.6 and 2.7.
(may consider py3k support at some point).
- Pool improvements from Celery.
- no-execv patch added (http://bugs.python.org/issue8713)

192
Doc/conf.py Normal file

@@ -0,0 +1,192 @@
# -*- coding: utf-8 -*-
#
# multiprocessing documentation build configuration file, created by
# sphinx-quickstart on Wed Nov 26 12:47:00 2008.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed automatically).
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
#sys.path.append(os.path.abspath('.'))
# General configuration
# ---------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'multiprocessing'
copyright = u'2008, Python Software Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
import os
import sys
sys.path.insert(0, os.path.join(os.getcwd(), os.pardir))
import billiard
#
# The short X.Y version.
version = billiard.__version__
# The full version, including alpha/beta/rc tags.
release = billiard.__version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = ['build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# Options for HTML output
# -----------------------
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
html_style = 'default.css'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_use_modindex = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'multiprocessingdoc'
# Options for LaTeX output
# ------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
latex_documents = [
('index', 'multiprocessing.tex', ur'multiprocessing Documentation',
ur'Python Software Foundation', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True

40
Doc/glossary.rst Normal file

@@ -0,0 +1,40 @@
.. _glossary:
********
Glossary
********
.. glossary::
bytecode
Python source code is compiled into bytecode, the internal representation
of a Python program in the interpreter. The bytecode is also cached in
``.pyc`` and ``.pyo`` files so that executing the same file is faster the
second time (recompilation from source to bytecode can be avoided). This
"intermediate language" is said to run on a :term:`virtual machine`
that executes the machine code corresponding to each bytecode.
CPython
The canonical implementation of the Python programming language. The
term "CPython" is used in contexts when necessary to distinguish this
implementation from others such as Jython or IronPython.
GIL
See :term:`global interpreter lock`.
global interpreter lock
The lock used by Python threads to assure that only one thread
executes in the :term:`CPython` :term:`virtual machine` at a time.
This simplifies the CPython implementation by assuring that no two
threads execute Python bytecode at the same time. Locking the
entire interpreter makes it easier for the interpreter to be
multi-threaded, at the expense of much of the parallelism afforded by
multi-processor machines. Efforts have been made in the past to
create a "free-threaded" interpreter (one which locks shared data at a
much finer granularity), but so far none have been successful because
performance suffered in the common single-processor case.
virtual machine
A computer defined entirely in software. Python's virtual machine
executes the :term:`bytecode` emitted by the bytecode compiler.

2
Doc/includes/__init__.py Normal file

@@ -0,0 +1,2 @@
# package


@@ -0,0 +1,238 @@
#
# Simple benchmarks for the multiprocessing package
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import time, sys, multiprocessing, threading, Queue, gc
if sys.platform == 'win32':
_timer = time.clock
else:
_timer = time.time
delta = 1
#### TEST_QUEUESPEED
def queuespeed_func(q, c, iterations):
a = '0' * 256
c.acquire()
c.notify()
c.release()
for i in xrange(iterations):
q.put(a)
q.put('STOP')
def test_queuespeed(Process, q, c):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
p = Process(target=queuespeed_func, args=(q, c, iterations))
c.acquire()
p.start()
c.wait()
c.release()
result = None
t = _timer()
while result != 'STOP':
result = q.get()
elapsed = _timer() - t
p.join()
print iterations, 'objects passed through the queue in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_PIPESPEED
def pipe_func(c, cond, iterations):
a = '0' * 256
cond.acquire()
cond.notify()
cond.release()
for i in xrange(iterations):
c.send(a)
c.send('STOP')
def test_pipespeed():
c, d = multiprocessing.Pipe()
cond = multiprocessing.Condition()
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
p = multiprocessing.Process(target=pipe_func,
args=(d, cond, iterations))
cond.acquire()
p.start()
cond.wait()
cond.release()
result = None
t = _timer()
while result != 'STOP':
result = c.recv()
elapsed = _timer() - t
p.join()
print iterations, 'objects passed through connection in',elapsed,'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_SEQSPEED
def test_seqspeed(seq):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
t = _timer()
for i in xrange(iterations):
a = seq[5]
elapsed = _timer()-t
print iterations, 'iterations in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_LOCK
def test_lockspeed(l):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
t = _timer()
for i in xrange(iterations):
l.acquire()
l.release()
elapsed = _timer()-t
print iterations, 'iterations in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_CONDITION
def conditionspeed_func(c, N):
c.acquire()
c.notify()
for i in xrange(N):
c.wait()
c.notify()
c.release()
def test_conditionspeed(Process, c):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
c.acquire()
p = Process(target=conditionspeed_func, args=(c, iterations))
p.start()
c.wait()
t = _timer()
for i in xrange(iterations):
c.notify()
c.wait()
elapsed = _timer()-t
c.release()
p.join()
print iterations * 2, 'waits in', elapsed, 'seconds'
print 'average number/sec:', iterations * 2 / elapsed
####
def test():
manager = multiprocessing.Manager()
gc.disable()
print '\n\t######## testing Queue.Queue\n'
test_queuespeed(threading.Thread, Queue.Queue(),
threading.Condition())
print '\n\t######## testing multiprocessing.Queue\n'
test_queuespeed(multiprocessing.Process, multiprocessing.Queue(),
multiprocessing.Condition())
print '\n\t######## testing Queue managed by server process\n'
test_queuespeed(multiprocessing.Process, manager.Queue(),
manager.Condition())
print '\n\t######## testing multiprocessing.Pipe\n'
test_pipespeed()
print
print '\n\t######## testing list\n'
test_seqspeed(range(10))
print '\n\t######## testing list managed by server process\n'
test_seqspeed(manager.list(range(10)))
print '\n\t######## testing Array("i", ..., lock=False)\n'
test_seqspeed(multiprocessing.Array('i', range(10), lock=False))
print '\n\t######## testing Array("i", ..., lock=True)\n'
test_seqspeed(multiprocessing.Array('i', range(10), lock=True))
print
print '\n\t######## testing threading.Lock\n'
test_lockspeed(threading.Lock())
print '\n\t######## testing threading.RLock\n'
test_lockspeed(threading.RLock())
print '\n\t######## testing multiprocessing.Lock\n'
test_lockspeed(multiprocessing.Lock())
print '\n\t######## testing multiprocessing.RLock\n'
test_lockspeed(multiprocessing.RLock())
print '\n\t######## testing lock managed by server process\n'
test_lockspeed(manager.Lock())
print '\n\t######## testing rlock managed by server process\n'
test_lockspeed(manager.RLock())
print
print '\n\t######## testing threading.Condition\n'
test_conditionspeed(threading.Thread, threading.Condition())
print '\n\t######## testing multiprocessing.Condition\n'
test_conditionspeed(multiprocessing.Process, multiprocessing.Condition())
print '\n\t######## testing condition managed by a server process\n'
test_conditionspeed(multiprocessing.Process, manager.Condition())
gc.enable()
if __name__ == '__main__':
multiprocessing.freeze_support()
test()

101
Doc/includes/mp_newtype.py Normal file

@@ -0,0 +1,101 @@
#
# This module shows how to use arbitrary callables with a subclass of
# `BaseManager`.
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
from multiprocessing import freeze_support
from multiprocessing.managers import BaseManager, BaseProxy
import operator
##
class Foo(object):
def f(self):
print 'you called Foo.f()'
def g(self):
print 'you called Foo.g()'
def _h(self):
print 'you called Foo._h()'
# A simple generator function
def baz():
for i in xrange(10):
yield i*i
# Proxy type for generator objects
class GeneratorProxy(BaseProxy):
_exposed_ = ('next', '__next__')
def __iter__(self):
return self
def next(self):
return self._callmethod('next')
def __next__(self):
return self._callmethod('__next__')
# Function to return the operator module
def get_operator_module():
return operator
##
class MyManager(BaseManager):
pass
# register the Foo class; make `f()` and `g()` accessible via proxy
MyManager.register('Foo1', Foo)
# register the Foo class; make `g()` and `_h()` accessible via proxy
MyManager.register('Foo2', Foo, exposed=('g', '_h'))
# register the generator function baz; use `GeneratorProxy` to make proxies
MyManager.register('baz', baz, proxytype=GeneratorProxy)
# register get_operator_module(); make public functions accessible via proxy
MyManager.register('operator', get_operator_module)
##
def test():
manager = MyManager()
manager.start()
print '-' * 20
f1 = manager.Foo1()
f1.f()
f1.g()
assert not hasattr(f1, '_h')
assert sorted(f1._exposed_) == sorted(['f', 'g'])
print '-' * 20
f2 = manager.Foo2()
f2.g()
f2._h()
assert not hasattr(f2, 'f')
assert sorted(f2._exposed_) == sorted(['g', '_h'])
print '-' * 20
it = manager.baz()
for i in it:
print '<%d>' % i,
print
print '-' * 20
op = manager.operator()
print 'op.add(23, 45) =', op.add(23, 45)
print 'op.pow(2, 94) =', op.pow(2, 94)
print 'op.getslice(range(10), 2, 6) =', op.getslice(range(10), 2, 6)
print 'op.repeat(range(5), 3) =', op.repeat(range(5), 3)
print 'op._exposed_ =', op._exposed_
##
if __name__ == '__main__':
freeze_support()
test()

314
Doc/includes/mp_pool.py Normal file

@@ -0,0 +1,314 @@
#
# A test of `multiprocessing.Pool` class
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import multiprocessing
import time
import random
import sys
#
# Functions used by test code
#
def calculate(func, args):
result = func(*args)
return '%s says that %s%s = %s' % (
multiprocessing.current_process().name,
func.__name__, args, result
)
def calculatestar(args):
return calculate(*args)
def mul(a, b):
time.sleep(0.5*random.random())
return a * b
def plus(a, b):
time.sleep(0.5*random.random())
return a + b
def f(x):
return 1.0 / (x-5.0)
def pow3(x):
return x**3
def noop(x):
pass
#
# Test code
#
def test():
print 'cpu_count() = %d\n' % multiprocessing.cpu_count()
#
# Create pool
#
PROCESSES = 4
print 'Creating pool with %d processes\n' % PROCESSES
pool = multiprocessing.Pool(PROCESSES)
print 'pool = %s' % pool
print
#
# Tests
#
TASKS = [(mul, (i, 7)) for i in range(10)] + \
[(plus, (i, 8)) for i in range(10)]
results = [pool.apply_async(calculate, t) for t in TASKS]
imap_it = pool.imap(calculatestar, TASKS)
imap_unordered_it = pool.imap_unordered(calculatestar, TASKS)
print 'Ordered results using pool.apply_async():'
for r in results:
print '\t', r.get()
print
print 'Ordered results using pool.imap():'
for x in imap_it:
print '\t', x
print
print 'Unordered results using pool.imap_unordered():'
for x in imap_unordered_it:
print '\t', x
print
print 'Ordered results using pool.map() --- will block till complete:'
for x in pool.map(calculatestar, TASKS):
print '\t', x
print
#
# Simple benchmarks
#
N = 100000
print 'def pow3(x): return x**3'
t = time.time()
A = map(pow3, xrange(N))
print '\tmap(pow3, xrange(%d)):\n\t\t%s seconds' % \
(N, time.time() - t)
t = time.time()
B = pool.map(pow3, xrange(N))
print '\tpool.map(pow3, xrange(%d)):\n\t\t%s seconds' % \
(N, time.time() - t)
t = time.time()
C = list(pool.imap(pow3, xrange(N), chunksize=N//8))
print '\tlist(pool.imap(pow3, xrange(%d), chunksize=%d)):\n\t\t%s' \
' seconds' % (N, N//8, time.time() - t)
assert A == B == C, (len(A), len(B), len(C))
print
L = [None] * 1000000
print 'def noop(x): pass'
print 'L = [None] * 1000000'
t = time.time()
A = map(noop, L)
print '\tmap(noop, L):\n\t\t%s seconds' % \
(time.time() - t)
t = time.time()
B = pool.map(noop, L)
print '\tpool.map(noop, L):\n\t\t%s seconds' % \
(time.time() - t)
t = time.time()
C = list(pool.imap(noop, L, chunksize=len(L)//8))
print '\tlist(pool.imap(noop, L, chunksize=%d)):\n\t\t%s seconds' % \
(len(L)//8, time.time() - t)
assert A == B == C, (len(A), len(B), len(C))
print
del A, B, C, L
#
# Test error handling
#
print 'Testing error handling:'
try:
print pool.apply(f, (5,))
except ZeroDivisionError:
print '\tGot ZeroDivisionError as expected from pool.apply()'
else:
raise AssertionError, 'expected ZeroDivisionError'
try:
print pool.map(f, range(10))
except ZeroDivisionError:
print '\tGot ZeroDivisionError as expected from pool.map()'
else:
raise AssertionError, 'expected ZeroDivisionError'
try:
print list(pool.imap(f, range(10)))
except ZeroDivisionError:
print '\tGot ZeroDivisionError as expected from list(pool.imap())'
else:
raise AssertionError, 'expected ZeroDivisionError'
it = pool.imap(f, range(10))
for i in range(10):
try:
x = it.next()
except ZeroDivisionError:
if i == 5:
pass
except StopIteration:
break
else:
if i == 5:
raise AssertionError, 'expected ZeroDivisionError'
assert i == 9
print '\tGot ZeroDivisionError as expected from IMapIterator.next()'
print
#
# Testing timeouts
#
print 'Testing ApplyResult.get() with timeout:',
res = pool.apply_async(calculate, TASKS[0])
while 1:
sys.stdout.flush()
try:
sys.stdout.write('\n\t%s' % res.get(0.02))
break
except multiprocessing.TimeoutError:
sys.stdout.write('.')
print
print
print 'Testing IMapIterator.next() with timeout:',
it = pool.imap(calculatestar, TASKS)
while 1:
sys.stdout.flush()
try:
sys.stdout.write('\n\t%s' % it.next(0.02))
except StopIteration:
break
except multiprocessing.TimeoutError:
sys.stdout.write('.')
print
print
#
# Testing callback
#
print 'Testing callback:'
A = []
B = [56, 0, 1, 8, 27, 64, 125, 216, 343, 512, 729]
r = pool.apply_async(mul, (7, 8), callback=A.append)
r.wait()
r = pool.map_async(pow3, range(10), callback=A.extend)
r.wait()
if A == B:
print '\tcallbacks succeeded\n'
else:
print '\t*** callbacks failed\n\t\t%s != %s\n' % (A, B)
#
# Check there are no outstanding tasks
#
assert not pool._cache, 'cache = %r' % pool._cache
#
# Check close() methods
#
print 'Testing close():'
for worker in pool._pool:
assert worker.is_alive()
result = pool.apply_async(time.sleep, [0.5])
pool.close()
pool.join()
assert result.get() is None
for worker in pool._pool:
assert not worker.is_alive()
print '\tclose() succeeded\n'
#
# Check terminate() method
#
print 'Testing terminate():'
pool = multiprocessing.Pool(2)
DELTA = 0.1
ignore = pool.apply(pow3, [2])
results = [pool.apply_async(time.sleep, [DELTA]) for i in range(100)]
pool.terminate()
pool.join()
for worker in pool._pool:
assert not worker.is_alive()
print '\tterminate() succeeded\n'
#
# Check garbage collection
#
print 'Testing garbage collection:'
pool = multiprocessing.Pool(2)
DELTA = 0.1
processes = pool._pool
ignore = pool.apply(pow3, [2])
results = [pool.apply_async(time.sleep, [DELTA]) for i in range(100)]
results = pool = None
time.sleep(DELTA * 2)
for worker in processes:
assert not worker.is_alive()
print '\tgarbage collection succeeded\n'
if __name__ == '__main__':
multiprocessing.freeze_support()
assert len(sys.argv) in (1, 2)
if len(sys.argv) == 1 or sys.argv[1] == 'processes':
print ' Using processes '.center(79, '-')
elif sys.argv[1] == 'threads':
print ' Using threads '.center(79, '-')
import multiprocessing.dummy as multiprocessing
else:
print 'Usage:\n\t%s [processes | threads]' % sys.argv[0]
raise SystemExit(2)
test()


@@ -0,0 +1,276 @@
#
# A test file for the `multiprocessing` package
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import time, sys, random
from Queue import Empty
import multiprocessing # may get overwritten
#### TEST_VALUE
def value_func(running, mutex):
random.seed()
time.sleep(random.random()*4)
mutex.acquire()
print '\n\t\t\t' + str(multiprocessing.current_process()) + ' has finished'
running.value -= 1
mutex.release()
def test_value():
TASKS = 10
running = multiprocessing.Value('i', TASKS)
mutex = multiprocessing.Lock()
for i in range(TASKS):
p = multiprocessing.Process(target=value_func, args=(running, mutex))
p.start()
while running.value > 0:
time.sleep(0.08)
mutex.acquire()
print running.value,
sys.stdout.flush()
mutex.release()
print
print 'No more running processes'
#### TEST_QUEUE
def queue_func(queue):
for i in range(30):
time.sleep(0.5 * random.random())
queue.put(i*i)
queue.put('STOP')
def test_queue():
q = multiprocessing.Queue()
p = multiprocessing.Process(target=queue_func, args=(q,))
p.start()
o = None
while o != 'STOP':
try:
o = q.get(timeout=0.3)
print o,
sys.stdout.flush()
except Empty:
print 'TIMEOUT'
print
#### TEST_CONDITION
def condition_func(cond):
cond.acquire()
print '\t' + str(cond)
time.sleep(2)
print '\tchild is notifying'
print '\t' + str(cond)
cond.notify()
cond.release()
def test_condition():
cond = multiprocessing.Condition()
p = multiprocessing.Process(target=condition_func, args=(cond,))
print cond
cond.acquire()
print cond
cond.acquire()
print cond
p.start()
print 'main is waiting'
cond.wait()
print 'main has woken up'
print cond
cond.release()
print cond
cond.release()
p.join()
print cond
#### TEST_SEMAPHORE
def semaphore_func(sema, mutex, running):
sema.acquire()
mutex.acquire()
running.value += 1
print running.value, 'tasks are running'
mutex.release()
random.seed()
time.sleep(random.random()*2)
mutex.acquire()
running.value -= 1
print '%s has finished' % multiprocessing.current_process()
mutex.release()
sema.release()
def test_semaphore():
sema = multiprocessing.Semaphore(3)
mutex = multiprocessing.RLock()
running = multiprocessing.Value('i', 0)
processes = [
multiprocessing.Process(target=semaphore_func,
args=(sema, mutex, running))
for i in range(10)
]
for p in processes:
p.start()
for p in processes:
p.join()
#### TEST_JOIN_TIMEOUT
def join_timeout_func():
print '\tchild sleeping'
time.sleep(5.5)
print '\n\tchild terminating'
def test_join_timeout():
p = multiprocessing.Process(target=join_timeout_func)
p.start()
print 'waiting for process to finish'
while 1:
p.join(timeout=1)
if not p.is_alive():
break
print '.',
sys.stdout.flush()
#### TEST_EVENT
def event_func(event):
print '\t%r is waiting' % multiprocessing.current_process()
event.wait()
print '\t%r has woken up' % multiprocessing.current_process()
def test_event():
event = multiprocessing.Event()
processes = [multiprocessing.Process(target=event_func, args=(event,))
for i in range(5)]
for p in processes:
p.start()
print 'main is sleeping'
time.sleep(2)
print 'main is setting event'
event.set()
for p in processes:
p.join()
#### TEST_SHAREDVALUES
def sharedvalues_func(values, arrays, shared_values, shared_arrays):
for i in range(len(values)):
v = values[i][1]
sv = shared_values[i].value
assert v == sv
for i in range(len(values)):
a = arrays[i][1]
sa = list(shared_arrays[i][:])
assert a == sa
print 'Tests passed'
def test_sharedvalues():
values = [
('i', 10),
('h', -2),
('d', 1.25)
]
arrays = [
('i', range(100)),
('d', [0.25 * i for i in range(100)]),
('H', range(1000))
]
shared_values = [multiprocessing.Value(id, v) for id, v in values]
shared_arrays = [multiprocessing.Array(id, a) for id, a in arrays]
p = multiprocessing.Process(
target=sharedvalues_func,
args=(values, arrays, shared_values, shared_arrays)
)
p.start()
p.join()
assert p.exitcode == 0
####
def test(namespace=multiprocessing):
global multiprocessing
multiprocessing = namespace
for func in [ test_value, test_queue, test_condition,
test_semaphore, test_join_timeout, test_event,
test_sharedvalues ]:
print '\n\t######## %s\n' % func.__name__
func()
ignore = multiprocessing.active_children() # cleanup any old processes
if hasattr(multiprocessing, '_debug_info'):
info = multiprocessing._debug_info()
if info:
print info
raise ValueError, 'there should be no positive refcounts left'
if __name__ == '__main__':
multiprocessing.freeze_support()
assert len(sys.argv) in (1, 2)
if len(sys.argv) == 1 or sys.argv[1] == 'processes':
print ' Using processes '.center(79, '-')
namespace = multiprocessing
elif sys.argv[1] == 'manager':
print ' Using processes and a manager '.center(79, '-')
namespace = multiprocessing.Manager()
namespace.Process = multiprocessing.Process
namespace.current_process = multiprocessing.current_process
namespace.active_children = multiprocessing.active_children
elif sys.argv[1] == 'threads':
print ' Using threads '.center(79, '-')
import multiprocessing.dummy as namespace
else:
print 'Usage:\n\t%s [processes | manager | threads]' % sys.argv[0]
raise SystemExit, 2
test(namespace)


@@ -0,0 +1,70 @@
#
# Example where a pool of http servers share a single listening socket
#
# On Windows this module depends on the ability to pickle a socket
# object so that the worker processes can inherit a copy of the server
# object. (We import `multiprocessing.reduction` to enable this pickling.)
#
# Not sure if we should synchronize access to `socket.accept()` method by
# using a process-shared lock -- does not seem to be necessary.
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import os
import sys
from multiprocessing import Process, current_process, freeze_support
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
if sys.platform == 'win32':
import multiprocessing.reduction # make sockets picklable/inheritable
def note(format, *args):
sys.stderr.write('[%s]\t%s\n' % (current_process().name, format%args))
class RequestHandler(SimpleHTTPRequestHandler):
# we override log_message() to show which process is handling the request
def log_message(self, format, *args):
note(format, *args)
def serve_forever(server):
note('starting server')
try:
server.serve_forever()
except KeyboardInterrupt:
pass
def runpool(address, number_of_processes):
# create a single server object -- children will each inherit a copy
server = HTTPServer(address, RequestHandler)
# create child processes to act as workers
for i in range(number_of_processes-1):
Process(target=serve_forever, args=(server,)).start()
# main process also acts as a worker
serve_forever(server)
def test():
DIR = os.path.join(os.path.dirname(__file__), '..')
ADDRESS = ('localhost', 8000)
NUMBER_OF_PROCESSES = 4
print 'Serving at http://%s:%d using %d worker processes' % \
(ADDRESS[0], ADDRESS[1], NUMBER_OF_PROCESSES)
print 'To exit press Ctrl-' + ['C', 'Break'][sys.platform=='win32']
os.chdir(DIR)
runpool(ADDRESS, NUMBER_OF_PROCESSES)
if __name__ == '__main__':
freeze_support()
test()


@@ -0,0 +1,90 @@
#
# Simple example which uses a pool of workers to carry out some tasks.
#
# Notice that the results will probably not come out of the output
# queue in the same order as the corresponding tasks were
# put on the input queue. If it is important to get the results back
# in the original order then consider using `Pool.map()` or
# `Pool.imap()` (which will save on the amount of code needed anyway).
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import time
import random
from multiprocessing import Process, Queue, current_process, freeze_support
#
# Function run by worker processes
#
def worker(input, output):
for func, args in iter(input.get, 'STOP'):
result = calculate(func, args)
output.put(result)
#
# Function used to calculate result
#
def calculate(func, args):
result = func(*args)
return '%s says that %s%s = %s' % \
(current_process().name, func.__name__, args, result)
#
# Functions referenced by tasks
#
def mul(a, b):
time.sleep(0.5*random.random())
return a * b
def plus(a, b):
time.sleep(0.5*random.random())
return a + b
#
#
#
def test():
NUMBER_OF_PROCESSES = 4
TASKS1 = [(mul, (i, 7)) for i in range(20)]
TASKS2 = [(plus, (i, 8)) for i in range(10)]
# Create queues
task_queue = Queue()
done_queue = Queue()
# Submit tasks
for task in TASKS1:
task_queue.put(task)
# Start worker processes
for i in range(NUMBER_OF_PROCESSES):
Process(target=worker, args=(task_queue, done_queue)).start()
# Get and print results
print 'Unordered results:'
for i in range(len(TASKS1)):
print '\t', done_queue.get()
# Add more tasks using `put()`
for task in TASKS2:
task_queue.put(task)
# Get and print some more results
for i in range(len(TASKS2)):
print '\t', done_queue.get()
# Tell child processes to stop
for i in range(NUMBER_OF_PROCESSES):
task_queue.put('STOP')
if __name__ == '__main__':
freeze_support()
test()

22
Doc/index.rst Normal file

@@ -0,0 +1,22 @@
.. multiprocessing documentation master file, created by sphinx-quickstart on Wed Nov 26 12:47:00 2008.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to multiprocessing's documentation!
===========================================
Contents:
.. toctree::
library/multiprocessing.rst
glossary.rst
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

File diff suppressed because it is too large

91
INSTALL.txt Normal file

@@ -0,0 +1,91 @@
.. default-role:: literal
================================
Installation of multiprocessing
================================
Versions earlier than Python 2.4 are not supported. If you are using
Python 2.4 then you must install the `ctypes` package (which comes
automatically with Python 2.5). Users of Python 2.4 on Windows
also need to install the `pywin32` package.
On Unix it's highly recommended to use Python 2.5.3 (not yet released) or
apply the ``fork-thread-patch-2`` patch from `Issue 1683
<http://bugs.python.org/issue1683>`_.
Windows binary builds for Python 2.4 and Python 2.5 are available at
http://pypi.python.org/pypi/multiprocessing
Python 2.6 and newer versions already come with multiprocessing. Although
the stand-alone variant of the multiprocessing package is kept compatible
with 2.6, you mustn't install it with Python 2.6.
Otherwise, if you have the correct C compiler setup then the source
distribution can be installed the usual way::
python setup.py install
It should not be necessary to do any editing of `setup.py` if you are
using Windows, Mac OS X or Linux. On other unices it may be necessary
to modify the values of the `macros` dictionary or `libraries` list.
The section to modify reads ::
else:
macros = dict(
HAVE_SEM_OPEN=1,
HAVE_SEM_TIMEDWAIT=1,
HAVE_FD_TRANSFER=1
)
libraries = ['rt']
More details can be found in the comments in `setup.py`.
Note that if you use `HAVE_SEM_OPEN=0` then support for posix
semaphores will not be compiled in, and many of the functions
in the `processing` namespace like `Lock()` or `Queue()` will not be
available. However, one can still create a manager using `manager =
processing.Manager()` and then do `lock = manager.Lock()` etc.
Running tests
-------------
To run the test scripts using Python 2.5 do ::
python -m multiprocessing.tests
and on Python 2.4 do ::
python -c "from multiprocessing.tests import main; main()"
The sources also come with a Makefile. To run the unit tests with the
Makefile using Python 2.5 do ::
make test
using another version of Python do ::
make test PYTHON=python2.4
This will run a number of test scripts using both processes and threads.
Running examples
----------------
The make target `examples` runs several example scripts.
Building docs
-------------
To build the standalone documentation you need Sphinx 0.5 and setuptools
0.6c9 or newer. Both are available at http://pypi.python.org/. With
setuptools installed, do ::
sudo easy_install-2.5 "Sphinx>=0.5"
make doc
The docs end up in ``build/sphinx/builder_name``.

29
LICENSE.txt Normal file

@@ -0,0 +1,29 @@
Copyright (c) 2006-2008, R Oudkerk and Contributors
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of author nor the names of any contributors may be
used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.

10
MANIFEST.in Normal file

@@ -0,0 +1,10 @@
include *.py
include *.txt
include *.rst
include Makefile
recursive-include Lib *.py
recursive-include Modules *.c *.h
recursive-include Doc *.rst *.py
recursive-include funtests *.py
recursive-include requirements *.txt
recursive-include billiard *.py

38
Makefile Normal file

@@ -0,0 +1,38 @@
PYTHON=python
flakecheck:
flake8 billiard
flakediag:
-$(MAKE) flakecheck
flakepluscheck:
flakeplus billiard --2.6
flakeplusdiag:
-$(MAKE) flakepluscheck
flakes: flakediag flakeplusdiag
test:
nosetests -xv billiard.tests
cov:
nosetests -xv billiard.tests --with-coverage --cover-html --cover-branch
removepyc:
-find . -type f -a \( -name "*.pyc" -o -name "*$$py.class" \) | xargs rm
-find . -type d -name "__pycache__" | xargs rm -r
gitclean:
git clean -xdn
gitcleanforce:
git clean -xdf
bump_version:
$(PYTHON) extra/release/bump_version.py billiard/__init__.py
distcheck: flakecheck test gitclean
dist: readme docsclean gitcleanforce removepyc


@@ -0,0 +1,685 @@
/*
* Definition of a `Connection` type.
* Used by `socket_connection.c` and `pipe_connection.c`.
*
* connection.h
*
* Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt
*/
#ifndef CONNECTION_H
#define CONNECTION_H
/*
* Read/write flags
*/
#define READABLE 1
#define WRITABLE 2
#define CHECK_READABLE(self) \
if (!(self->flags & READABLE)) { \
PyErr_SetString(PyExc_IOError, "connection is write-only"); \
return NULL; \
}
#define CHECK_WRITABLE(self) \
if (!(self->flags & WRITABLE)) { \
PyErr_SetString(PyExc_IOError, "connection is read-only"); \
return NULL; \
}
/*
* Externally implemented functions
*/
extern void _Billiard_setblocking(int fd, int blocking);
extern ssize_t _Billiard_conn_send_offset(HANDLE fd, char *string, Py_ssize_t len, Py_ssize_t offset);
/*
* Allocation and deallocation
*/
static PyObject *
Billiard_connection_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
BilliardConnectionObject *self;
HANDLE handle;
BOOL readable = TRUE, writable = TRUE;
static char *kwlist[] = {"handle", "readable", "writable", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwds, F_HANDLE "|ii", kwlist,
&handle, &readable, &writable))
return NULL;
if (handle == INVALID_HANDLE_VALUE || (Py_ssize_t)handle < 0) {
PyErr_Format(PyExc_IOError, "invalid handle %zd",
(Py_ssize_t)handle);
return NULL;
}
if (!readable && !writable) {
PyErr_SetString(PyExc_ValueError,
"either readable or writable must be true");
return NULL;
}
self = PyObject_New(BilliardConnectionObject, type);
if (self == NULL)
return NULL;
self->weakreflist = NULL;
self->handle = handle;
self->flags = 0;
if (readable)
self->flags |= READABLE;
if (writable)
self->flags |= WRITABLE;
assert(self->flags >= 1 && self->flags <= 3);
return (PyObject*)self;
}
static void
Billiard_connection_dealloc(BilliardConnectionObject* self)
{
if (self->weakreflist != NULL)
PyObject_ClearWeakRefs((PyObject*)self);
if (self->handle != INVALID_HANDLE_VALUE) {
Py_BEGIN_ALLOW_THREADS
CLOSE(self->handle);
Py_END_ALLOW_THREADS
}
PyObject_Del(self);
}
/*
* Functions for transferring buffers
*/
static PyObject *
Billiard_connection_sendbytes(BilliardConnectionObject *self, PyObject *args)
{
char *buffer;
Py_ssize_t length, offset=0, size=PY_SSIZE_T_MIN;
int res;
if (!PyArg_ParseTuple(args, F_RBUFFER "#|" F_PY_SSIZE_T F_PY_SSIZE_T,
&buffer, &length, &offset, &size))
return NULL;
CHECK_WRITABLE(self);
if (offset < 0) {
PyErr_SetString(PyExc_ValueError, "offset is negative");
return NULL;
}
if (length < offset) {
PyErr_SetString(PyExc_ValueError, "buffer length < offset");
return NULL;
}
if (size == PY_SSIZE_T_MIN) {
size = length - offset;
} else {
if (size < 0) {
PyErr_SetString(PyExc_ValueError, "size is negative");
return NULL;
}
if (offset + size > length) {
PyErr_SetString(PyExc_ValueError,
"buffer length < offset + size");
return NULL;
}
}
res = Billiard_conn_send_string(self, buffer + offset, size);
if (res < 0) {
if (PyErr_Occurred())
return NULL;
else
return Billiard_SetError(PyExc_IOError, res);
}
Py_RETURN_NONE;
}
static PyObject *
Billiard_connection_recvbytes(BilliardConnectionObject *self, PyObject *args)
{
char *freeme = NULL;
Py_ssize_t res, maxlength = PY_SSIZE_T_MAX;
PyObject *result = NULL;
if (!PyArg_ParseTuple(args, "|" F_PY_SSIZE_T, &maxlength))
return NULL;
CHECK_READABLE(self);
if (maxlength < 0) {
PyErr_SetString(PyExc_ValueError, "maxlength < 0");
return NULL;
}
res = Billiard_conn_recv_string(self, self->buffer, CONNECTION_BUFFER_SIZE,
&freeme, maxlength);
if (res < 0) {
if (res == MP_BAD_MESSAGE_LENGTH) {
if ((self->flags & WRITABLE) == 0) {
Py_BEGIN_ALLOW_THREADS
CLOSE(self->handle);
Py_END_ALLOW_THREADS
self->handle = INVALID_HANDLE_VALUE;
} else {
self->flags = WRITABLE;
}
}
Billiard_SetError(PyExc_IOError, res);
} else {
if (freeme == NULL) {
result = PyString_FromStringAndSize(self->buffer, res);
} else {
result = PyString_FromStringAndSize(freeme, res);
PyMem_Free(freeme);
}
}
return result;
}
#ifdef HAS_NEW_PY_BUFFER
static PyObject *
Billiard_connection_recvbytes_into(BilliardConnectionObject *self, PyObject *args)
{
char *freeme = NULL, *buffer = NULL;
Py_ssize_t res, length, offset = 0;
PyObject *result = NULL;
Py_buffer pbuf;
CHECK_READABLE(self);
if (!PyArg_ParseTuple(args, "w*|" F_PY_SSIZE_T,
&pbuf, &offset))
return NULL;
buffer = pbuf.buf;
length = pbuf.len;
if (offset < 0) {
PyErr_SetString(PyExc_ValueError, "negative offset");
goto _error;
}
if (offset > length) {
PyErr_SetString(PyExc_ValueError, "offset too large");
goto _error;
}
res = Billiard_conn_recv_string(self, buffer+offset, length-offset,
&freeme, PY_SSIZE_T_MAX);
if (res < 0) {
if (res == MP_BAD_MESSAGE_LENGTH) {
if ((self->flags & WRITABLE) == 0) {
Py_BEGIN_ALLOW_THREADS
CLOSE(self->handle);
Py_END_ALLOW_THREADS
self->handle = INVALID_HANDLE_VALUE;
} else {
self->flags = WRITABLE;
}
}
Billiard_SetError(PyExc_IOError, res);
} else {
if (freeme == NULL) {
result = PyInt_FromSsize_t(res);
} else {
result = PyObject_CallFunction(Billiard_BufferTooShort,
F_RBUFFER "#",
freeme, res);
PyMem_Free(freeme);
if (result) {
PyErr_SetObject(Billiard_BufferTooShort, result);
Py_DECREF(result);
}
goto _error;
}
}
_cleanup:
PyBuffer_Release(&pbuf);
return result;
_error:
result = NULL;
goto _cleanup;
}
# else /* old buffer protocol */
static PyObject *
Billiard_connection_recvbytes_into(BilliardConnectionObject *self, PyObject *args)
{
char *freeme = NULL, *buffer = NULL;
Py_ssize_t length = 0, res, offset = 0;
PyObject *result = NULL;
CHECK_READABLE(self);
if (!PyArg_ParseTuple(args, "w#|", F_PY_SSIZE_T,
&buffer, &length, &offset))
return NULL;
if (offset < 0) {
PyErr_SetString(PyExc_ValueError, "negative offset");
goto _error;
}
    if (offset > length) {
        PyErr_SetString(PyExc_ValueError, "offset too large");
        goto _error;
    }
res = Billiard_conn_recv_string(self, buffer+offset, length-offset,
&freeme, PY_SSIZE_T_MAX);
if (res < 0) {
if (res == MP_BAD_MESSAGE_LENGTH) {
if ((self->flags & WRITABLE) == 0) {
Py_BEGIN_ALLOW_THREADS
CLOSE(self->handle);
Py_END_ALLOW_THREADS
self->handle = INVALID_HANDLE_VALUE;
} else {
self->flags = WRITABLE;
}
}
Billiard_SetError(PyExc_IOError, res);
} else {
if (freeme == NULL) {
result = PyInt_FromSsize_t(res);
} else {
result = PyObject_CallFunction(Billiard_BufferTooShort,
F_RBUFFER "#",
freeme, res);
PyMem_Free(freeme);
if (result) {
PyErr_SetObject(Billiard_BufferTooShort, result);
Py_DECREF(result);
}
goto _error;
}
}
_cleanup:
return result;
_error:
result = NULL;
goto _cleanup;
}
# endif /* buffer */
/*
* Functions for transferring objects
*/
static PyObject *
Billiard_connection_send_offset(BilliardConnectionObject *self, PyObject *args)
{
Py_buffer view;
Py_ssize_t len = 0;
Py_ssize_t offset = 0;
ssize_t written = 0;
char *buf = NULL;
if (!PyArg_ParseTuple(args, "s*n", &view, &offset))
return NULL;
len = view.len;
buf = view.buf;
    /* CHECK_WRITABLE() returns NULL directly, so the check is inlined
       here to make sure the Py_buffer is released on the error path. */
    if (!(self->flags & WRITABLE)) {
        PyErr_SetString(PyExc_IOError, "connection is read-only");
        goto bail;
    }
    if (len <= 0) {
        errno = EINVAL;
        PyErr_SetFromErrno(PyExc_OSError);
        goto bail;
    }
written = _Billiard_conn_send_offset(self->handle, buf, (size_t)len, offset);
if (written < 0) {
Billiard_SetError(NULL, MP_SOCKET_ERROR);
goto bail;
}
PyBuffer_Release(&view);
return PyInt_FromSsize_t((Py_ssize_t)written);
bail:
PyBuffer_Release(&view);
return NULL;
}
static PyObject *
Billiard_connection_send_obj(BilliardConnectionObject *self, PyObject *obj)
{
char *buffer;
int res;
Py_ssize_t length;
PyObject *pickled_string = NULL;
CHECK_WRITABLE(self);
pickled_string = PyObject_CallFunctionObjArgs(
Billiard_pickle_dumps, obj, Billiard_pickle_protocol, NULL
);
if (!pickled_string)
goto failure;
if (PyString_AsStringAndSize(pickled_string, &buffer, &length) < 0)
goto failure;
res = Billiard_conn_send_string(self, buffer, (int)length);
if (res != MP_SUCCESS) {
Billiard_SetError(NULL, res);
goto failure;
}
Py_XDECREF(pickled_string);
Py_RETURN_NONE;
failure:
Py_XDECREF(pickled_string);
return NULL;
}
static PyObject *
Billiard_connection_setblocking(BilliardConnectionObject *self, PyObject *arg)
{
_Billiard_setblocking((int)self->handle, PyInt_AS_LONG(arg));
Py_RETURN_NONE;
}
static PyObject *
Billiard_connection_recv_payload(BilliardConnectionObject *self)
{
char *freeme = NULL;
Py_ssize_t res;
PyObject *view = NULL;
CHECK_READABLE(self);
res = Billiard_conn_recv_string(self, self->buffer, CONNECTION_BUFFER_SIZE,
&freeme, PY_SSIZE_T_MAX);
if (res < 0) {
if (res == MP_BAD_MESSAGE_LENGTH) {
if ((self->flags & WRITABLE) == 0) {
Py_BEGIN_ALLOW_THREADS
CLOSE(self->handle);
Py_END_ALLOW_THREADS
self->handle = INVALID_HANDLE_VALUE;
} else {
self->flags = WRITABLE;
}
}
Billiard_SetError(PyExc_IOError, res);
goto error;
} else {
if (freeme == NULL) {
view = PyBuffer_FromMemory(self->buffer, res);
} else {
view = PyString_FromStringAndSize(freeme, res);
PyMem_Free(freeme);
}
}
return view;
error:
return NULL;
}
static PyObject *
Billiard_connection_recv_obj(BilliardConnectionObject *self)
{
char *freeme = NULL;
Py_ssize_t res;
PyObject *temp = NULL, *result = NULL;
CHECK_READABLE(self);
res = Billiard_conn_recv_string(self, self->buffer, CONNECTION_BUFFER_SIZE,
&freeme, PY_SSIZE_T_MAX);
if (res < 0) {
if (res == MP_BAD_MESSAGE_LENGTH) {
if ((self->flags & WRITABLE) == 0) {
Py_BEGIN_ALLOW_THREADS
CLOSE(self->handle);
Py_END_ALLOW_THREADS
self->handle = INVALID_HANDLE_VALUE;
} else {
self->flags = WRITABLE;
}
}
Billiard_SetError(PyExc_IOError, res);
} else {
if (freeme == NULL) {
temp = PyString_FromStringAndSize(self->buffer, res);
} else {
temp = PyString_FromStringAndSize(freeme, res);
PyMem_Free(freeme);
}
}
if (temp)
result = PyObject_CallFunctionObjArgs(Billiard_pickle_loads,
temp, NULL);
Py_XDECREF(temp);
return result;
}
/*
* Other functions
*/
static PyObject *
Billiard_connection_poll(BilliardConnectionObject *self, PyObject *args)
{
PyObject *timeout_obj = NULL;
double timeout = 0.0;
int res;
CHECK_READABLE(self);
if (!PyArg_ParseTuple(args, "|O", &timeout_obj))
return NULL;
if (timeout_obj == NULL) {
timeout = 0.0;
} else if (timeout_obj == Py_None) {
timeout = -1.0; /* block forever */
} else {
timeout = PyFloat_AsDouble(timeout_obj);
if (PyErr_Occurred())
return NULL;
if (timeout < 0.0)
timeout = 0.0;
}
Py_BEGIN_ALLOW_THREADS
res = Billiard_conn_poll(self, timeout, _save);
Py_END_ALLOW_THREADS
switch (res) {
case TRUE:
Py_RETURN_TRUE;
case FALSE:
Py_RETURN_FALSE;
default:
return Billiard_SetError(PyExc_IOError, res);
}
}
static PyObject *
Billiard_connection_fileno(BilliardConnectionObject* self)
{
if (self->handle == INVALID_HANDLE_VALUE) {
PyErr_SetString(PyExc_IOError, "handle is invalid");
return NULL;
}
return PyInt_FromLong((long)self->handle);
}
static PyObject *
Billiard_connection_close(BilliardConnectionObject *self)
{
if (self->handle != INVALID_HANDLE_VALUE) {
Py_BEGIN_ALLOW_THREADS
CLOSE(self->handle);
Py_END_ALLOW_THREADS
self->handle = INVALID_HANDLE_VALUE;
}
Py_RETURN_NONE;
}
static PyObject *
Billiard_connection_repr(BilliardConnectionObject *self)
{
static char *conn_type[] = {"read-only", "write-only", "read-write"};
assert(self->flags >= 1 && self->flags <= 3);
return FROM_FORMAT("<%s %s, handle %zd>",
conn_type[self->flags - 1],
CONNECTION_NAME, (Py_ssize_t)self->handle);
}
/*
* Getters and setters
*/
static PyObject *
Billiard_connection_closed(BilliardConnectionObject *self, void *closure)
{
return PyBool_FromLong((long)(self->handle == INVALID_HANDLE_VALUE));
}
static PyObject *
Billiard_connection_readable(BilliardConnectionObject *self, void *closure)
{
return PyBool_FromLong((long)(self->flags & READABLE));
}
static PyObject *
Billiard_connection_writable(BilliardConnectionObject *self, void *closure)
{
return PyBool_FromLong((long)(self->flags & WRITABLE));
}
/*
* Tables
*/
static PyMethodDef Billiard_connection_methods[] = {
{"send_bytes", (PyCFunction)Billiard_connection_sendbytes, METH_VARARGS,
"send the byte data from a readable buffer-like object"},
{"recv_bytes", (PyCFunction)Billiard_connection_recvbytes, METH_VARARGS,
"receive byte data as a string"},
{"recv_bytes_into",(PyCFunction)Billiard_connection_recvbytes_into,METH_VARARGS,
"receive byte data into a writeable buffer-like object\n"
"returns the number of bytes read"},
{"send", (PyCFunction)Billiard_connection_send_obj, METH_O,
"send a (picklable) object"},
{"send_offset", (PyCFunction)Billiard_connection_send_offset, METH_VARARGS,
"send string/buffer (non-blocking)"},
{"recv", (PyCFunction)Billiard_connection_recv_obj, METH_NOARGS,
"receive a (picklable) object"},
{"setblocking",(PyCFunction)Billiard_connection_setblocking, METH_O,
"set socket blocking/non-blocking"},
{"recv_payload", (PyCFunction)Billiard_connection_recv_payload, METH_NOARGS,
"receive raw payload (not unpickled)"},
{"poll", (PyCFunction)Billiard_connection_poll, METH_VARARGS,
"whether there is any input available to be read"},
{"fileno", (PyCFunction)Billiard_connection_fileno, METH_NOARGS,
"file descriptor or handle of the connection"},
{"close", (PyCFunction)Billiard_connection_close, METH_NOARGS,
"close the connection"},
{NULL} /* Sentinel */
};
static PyGetSetDef Billiard_connection_getset[] = {
{"closed", (getter)Billiard_connection_closed, NULL,
"True if the connection is closed", NULL},
{"readable", (getter)Billiard_connection_readable, NULL,
"True if the connection is readable", NULL},
{"writable", (getter)Billiard_connection_writable, NULL,
"True if the connection is writable", NULL},
{NULL}
};
/*
* Connection type
*/
PyDoc_STRVAR(Billiard_connection_doc,
"Connection type whose constructor signature is\n\n"
" Connection(handle, readable=True, writable=True).\n\n"
"The constructor does *not* duplicate the handle.");
PyTypeObject CONNECTION_TYPE = {
PyVarObject_HEAD_INIT(NULL, 0)
/* tp_name */ "_billiard." CONNECTION_NAME,
/* tp_basicsize */ sizeof(BilliardConnectionObject),
/* tp_itemsize */ 0,
/* tp_dealloc */ (destructor)Billiard_connection_dealloc,
/* tp_print */ 0,
/* tp_getattr */ 0,
/* tp_setattr */ 0,
/* tp_compare */ 0,
/* tp_repr */ (reprfunc)Billiard_connection_repr,
/* tp_as_number */ 0,
/* tp_as_sequence */ 0,
/* tp_as_mapping */ 0,
/* tp_hash */ 0,
/* tp_call */ 0,
/* tp_str */ 0,
/* tp_getattro */ 0,
/* tp_setattro */ 0,
/* tp_as_buffer */ 0,
/* tp_flags */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE |
Py_TPFLAGS_HAVE_WEAKREFS,
/* tp_doc */ Billiard_connection_doc,
/* tp_traverse */ 0,
/* tp_clear */ 0,
/* tp_richcompare */ 0,
/* tp_weaklistoffset */ offsetof(BilliardConnectionObject, weakreflist),
/* tp_iter */ 0,
/* tp_iternext */ 0,
/* tp_methods */ Billiard_connection_methods,
/* tp_members */ 0,
/* tp_getset */ Billiard_connection_getset,
/* tp_base */ 0,
/* tp_dict */ 0,
/* tp_descr_get */ 0,
/* tp_descr_set */ 0,
/* tp_dictoffset */ 0,
/* tp_init */ 0,
/* tp_alloc */ 0,
/* tp_new */ Billiard_connection_new,
};
#endif /* CONNECTION_H */

@ -0,0 +1,386 @@
/*
* Extension module used by multiprocessing package
*
* multiprocessing.c
*
* Copyright (c) 2006-2008, R Oudkerk
* Licensed to PSF under a Contributor Agreement.
*/
#include "multiprocessing.h"
#ifdef SCM_RIGHTS
#define HAVE_FD_TRANSFER 1
#else
#define HAVE_FD_TRANSFER 0
#endif
PyObject *create_win32_namespace(void);
PyObject *Billiard_pickle_dumps;
PyObject *Billiard_pickle_loads;
PyObject *Billiard_pickle_protocol;
PyObject *Billiard_BufferTooShort;
/*
* Function which raises exceptions based on error codes
*/
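/*
 * Note: this helper always returns NULL, so callers can propagate
 * failure in one step, e.g.:
 *
 *     if (res < 0)
 *         return Billiard_SetError(PyExc_IOError, res);
 */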
PyObject *
Billiard_SetError(PyObject *Type, int num)
{
switch (num) {
case MP_SUCCESS:
break;
#ifdef MS_WINDOWS
case MP_STANDARD_ERROR:
if (Type == NULL)
Type = PyExc_WindowsError;
PyErr_SetExcFromWindowsErr(Type, 0);
break;
case MP_SOCKET_ERROR:
if (Type == NULL)
Type = PyExc_WindowsError;
PyErr_SetExcFromWindowsErr(Type, WSAGetLastError());
break;
#else /* !MS_WINDOWS */
case MP_STANDARD_ERROR:
case MP_SOCKET_ERROR:
if (Type == NULL)
Type = PyExc_OSError;
PyErr_SetFromErrno(Type);
break;
#endif /* !MS_WINDOWS */
case MP_MEMORY_ERROR:
PyErr_NoMemory();
break;
case MP_END_OF_FILE:
PyErr_SetNone(PyExc_EOFError);
break;
case MP_EARLY_END_OF_FILE:
PyErr_SetString(PyExc_IOError,
"got end of file during message");
break;
case MP_BAD_MESSAGE_LENGTH:
PyErr_SetString(PyExc_IOError, "bad message length");
break;
case MP_EXCEPTION_HAS_BEEN_SET:
break;
default:
PyErr_Format(PyExc_RuntimeError,
"unkown error number %d", num);
}
return NULL;
}
/*
* Windows only
*/
#ifdef MS_WINDOWS
/* On Windows we set an event to signal Ctrl-C; compare with timemodule.c */
HANDLE sigint_event = NULL;
static BOOL WINAPI
ProcessingCtrlHandler(DWORD dwCtrlType)
{
SetEvent(sigint_event);
return FALSE;
}
/*
* Unix only
*/
#else /* !MS_WINDOWS */
#if HAVE_FD_TRANSFER
/* Functions for transferring file descriptors between processes.
Reimplements some of the functionality of the fdcred
module at http://www.mca-ltd.com/resources/fdcred_1.tgz. */
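/*
 * Rough Python-level sketch (illustrative; `some_fd` stands for any
 * open descriptor):
 *
 *     >>> import socket, _billiard
 *     >>> a, b = socket.socketpair(socket.AF_UNIX)
 *     >>> _billiard.sendfd(a.fileno(), some_fd)
 *     >>> received = _billiard.recvfd(b.fileno())
 *
 * The descriptor travels as SCM_RIGHTS ancillary data attached to a one
 * byte dummy payload; the receiver gets its own duplicate of the fd.
 */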
static PyObject *
Billiard_multiprocessing_sendfd(PyObject *self, PyObject *args)
{
int conn, fd, res;
char dummy_char;
char buf[CMSG_SPACE(sizeof(int))];
struct msghdr msg = {0};
struct iovec dummy_iov;
struct cmsghdr *cmsg;
if (!PyArg_ParseTuple(args, "ii", &conn, &fd))
return NULL;
dummy_iov.iov_base = &dummy_char;
dummy_iov.iov_len = 1;
msg.msg_control = buf;
msg.msg_controllen = sizeof(buf);
msg.msg_iov = &dummy_iov;
msg.msg_iovlen = 1;
cmsg = CMSG_FIRSTHDR(&msg);
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_RIGHTS;
cmsg->cmsg_len = CMSG_LEN(sizeof(int));
msg.msg_controllen = cmsg->cmsg_len;
*(int*)CMSG_DATA(cmsg) = fd;
Py_BEGIN_ALLOW_THREADS
res = sendmsg(conn, &msg, 0);
Py_END_ALLOW_THREADS
if (res < 0)
return PyErr_SetFromErrno(PyExc_OSError);
Py_RETURN_NONE;
}
static PyObject *
Billiard_multiprocessing_recvfd(PyObject *self, PyObject *args)
{
int conn, fd, res;
char dummy_char;
char buf[CMSG_SPACE(sizeof(int))];
struct msghdr msg = {0};
struct iovec dummy_iov;
struct cmsghdr *cmsg;
if (!PyArg_ParseTuple(args, "i", &conn))
return NULL;
dummy_iov.iov_base = &dummy_char;
dummy_iov.iov_len = 1;
msg.msg_control = buf;
msg.msg_controllen = sizeof(buf);
msg.msg_iov = &dummy_iov;
msg.msg_iovlen = 1;
cmsg = CMSG_FIRSTHDR(&msg);
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_RIGHTS;
cmsg->cmsg_len = CMSG_LEN(sizeof(int));
msg.msg_controllen = cmsg->cmsg_len;
Py_BEGIN_ALLOW_THREADS
res = recvmsg(conn, &msg, 0);
Py_END_ALLOW_THREADS
if (res < 0)
return PyErr_SetFromErrno(PyExc_OSError);
fd = *(int*)CMSG_DATA(cmsg);
return Py_BuildValue("i", fd);
}
#endif /* HAVE_FD_TRANSFER */
#endif /* !MS_WINDOWS */
/*
* All platforms
*/
static PyObject*
Billiard_multiprocessing_address_of_buffer(PyObject *self, PyObject *obj)
{
void *buffer;
Py_ssize_t buffer_len;
if (PyObject_AsWriteBuffer(obj, &buffer, &buffer_len) < 0)
return NULL;
return Py_BuildValue("N" F_PY_SSIZE_T,
PyLong_FromVoidPtr(buffer), buffer_len);
}
#if !defined(MS_WINDOWS)
static PyObject *
Billiard_read(PyObject *self, PyObject *args)
{
int fd;
Py_buffer view;
Py_ssize_t buflen, recvlen = 0;
char *buf = NULL;
Py_ssize_t n = 0;
if (!PyArg_ParseTuple(args, "iw*|n", &fd, &view, &recvlen))
return NULL;
buflen = view.len;
buf = view.buf;
if (recvlen < 0) {
PyBuffer_Release(&view);
PyErr_SetString(PyExc_ValueError, "negative len for read");
return NULL;
}
if (recvlen == 0) {
recvlen = buflen;
}
if (buflen < recvlen) {
PyBuffer_Release(&view);
PyErr_SetString(PyExc_ValueError,
"Buffer too small for requested bytes");
return NULL;
}
    if (buflen <= 0) {
errno = EINVAL;
goto bail;
}
// Requires Python 2.7
//if (!_PyVerify_fd(fd)) goto bail;
Py_BEGIN_ALLOW_THREADS
n = read(fd, buf, recvlen);
Py_END_ALLOW_THREADS
if (n < 0) goto bail;
PyBuffer_Release(&view);
return PyInt_FromSsize_t(n);
bail:
PyBuffer_Release(&view);
return PyErr_SetFromErrno(PyExc_OSError);
}
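/*
 * Python-level sketch (illustrative):
 *
 *     >>> buf = bytearray(4096)
 *     >>> n = _billiard.read(fd, buf)        # read up to len(buf) bytes
 *     >>> n = _billiard.read(fd, buf, 512)   # read at most 512 bytes
 *
 * An omitted or zero length means "fill as much of the buffer as a
 * single read() allows".
 */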
# endif /* !MS_WINDOWS */
/*
* Function table
*/
static PyMethodDef Billiard_module_methods[] = {
{"address_of_buffer", Billiard_multiprocessing_address_of_buffer, METH_O,
"address_of_buffer(obj) -> int\n\n"
"Return address of obj assuming obj supports buffer inteface"},
#if HAVE_FD_TRANSFER
{"sendfd", Billiard_multiprocessing_sendfd, METH_VARARGS,
"sendfd(sockfd, fd) -> None\n\n"
"Send file descriptor given by fd over the unix domain socket\n"
"whose file decriptor is sockfd"},
{"recvfd", Billiard_multiprocessing_recvfd, METH_VARARGS,
"recvfd(sockfd) -> fd\n\n"
"Receive a file descriptor over a unix domain socket\n"
"whose file decriptor is sockfd"},
#endif
#if !defined(MS_WINDOWS)
{"read", Billiard_read, METH_VARARGS,
"read(fd, buffer) -> bytes\n\n"
"Read from file descriptor into buffer."},
#endif
{NULL}
};
/*
* Initialize
*/
PyMODINIT_FUNC
init_billiard(void)
{
PyObject *module, *temp, *value;
/* Initialize module */
module = Py_InitModule("_billiard", Billiard_module_methods);
if (!module)
return;
/* Get copy of objects from pickle */
temp = PyImport_ImportModule(PICKLE_MODULE);
if (!temp)
return;
Billiard_pickle_dumps = PyObject_GetAttrString(temp, "dumps");
Billiard_pickle_loads = PyObject_GetAttrString(temp, "loads");
Billiard_pickle_protocol = PyObject_GetAttrString(temp, "HIGHEST_PROTOCOL");
Py_XDECREF(temp);
/* Get copy of BufferTooShort */
temp = PyImport_ImportModule("billiard");
if (!temp)
return;
Billiard_BufferTooShort = PyObject_GetAttrString(temp, "BufferTooShort");
Py_XDECREF(temp);
/* Add connection type to module */
if (PyType_Ready(&BilliardConnectionType) < 0)
return;
Py_INCREF(&BilliardConnectionType);
PyModule_AddObject(module, "Connection", (PyObject*)&BilliardConnectionType);
#if defined(MS_WINDOWS) || \
(defined(HAVE_SEM_OPEN) && !defined(POSIX_SEMAPHORES_NOT_ENABLED))
/* Add SemLock type to module */
if (PyType_Ready(&BilliardSemLockType) < 0)
return;
Py_INCREF(&BilliardSemLockType);
PyDict_SetItemString(BilliardSemLockType.tp_dict, "SEM_VALUE_MAX",
Py_BuildValue("i", SEM_VALUE_MAX));
PyModule_AddObject(module, "SemLock", (PyObject*)&BilliardSemLockType);
#endif
#ifdef MS_WINDOWS
/* Add PipeConnection to module */
if (PyType_Ready(&BilliardPipeConnectionType) < 0)
return;
Py_INCREF(&BilliardPipeConnectionType);
PyModule_AddObject(module, "PipeConnection",
(PyObject*)&BilliardPipeConnectionType);
/* Initialize win32 class and add to multiprocessing */
temp = create_win32_namespace();
if (!temp)
return;
PyModule_AddObject(module, "win32", temp);
/* Initialize the event handle used to signal Ctrl-C */
sigint_event = CreateEvent(NULL, TRUE, FALSE, NULL);
if (!sigint_event) {
PyErr_SetFromWindowsErr(0);
return;
}
if (!SetConsoleCtrlHandler(ProcessingCtrlHandler, TRUE)) {
PyErr_SetFromWindowsErr(0);
return;
}
#endif
/* Add configuration macros */
temp = PyDict_New();
if (!temp)
return;
#define ADD_FLAG(name) \
value = Py_BuildValue("i", name); \
if (value == NULL) { Py_DECREF(temp); return; } \
if (PyDict_SetItemString(temp, #name, value) < 0) { \
Py_DECREF(temp); Py_DECREF(value); return; } \
Py_DECREF(value)
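/*
 * The collected flags end up as a plain dict on the module; contents
 * vary by platform, e.g. (illustrative):
 *
 *     >>> _billiard.flags
 *     {'HAVE_SEM_OPEN': 1, 'HAVE_SEM_TIMEDWAIT': 1, 'HAVE_FD_TRANSFER': 1}
 */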
#if defined(HAVE_SEM_OPEN) && !defined(POSIX_SEMAPHORES_NOT_ENABLED)
ADD_FLAG(HAVE_SEM_OPEN);
#endif
#ifdef HAVE_SEM_TIMEDWAIT
ADD_FLAG(HAVE_SEM_TIMEDWAIT);
#endif
#ifdef HAVE_FD_TRANSFER
ADD_FLAG(HAVE_FD_TRANSFER);
#endif
#ifdef HAVE_BROKEN_SEM_GETVALUE
ADD_FLAG(HAVE_BROKEN_SEM_GETVALUE);
#endif
#ifdef HAVE_BROKEN_SEM_UNLINK
ADD_FLAG(HAVE_BROKEN_SEM_UNLINK);
#endif
if (PyModule_AddObject(module, "flags", temp) < 0)
return;
}

@ -0,0 +1,189 @@
#ifndef MULTIPROCESSING_H
#define MULTIPROCESSING_H
#define PY_SSIZE_T_CLEAN
#ifdef __sun
/* The control message API is only available on Solaris
if XPG 4.2 or later is requested. */
#define _XOPEN_SOURCE 500
#endif
#include "Python.h"
#include "structmember.h"
#include "pythread.h"
/*
* Platform includes and definitions
*/
#ifdef MS_WINDOWS
# define WIN32_LEAN_AND_MEAN
# include <windows.h>
# include <winsock2.h>
# include <process.h> /* getpid() */
# ifdef Py_DEBUG
# include <crtdbg.h>
# endif
# define SEM_HANDLE HANDLE
# define SEM_VALUE_MAX LONG_MAX
#else
# include <fcntl.h> /* O_CREAT and O_EXCL */
# include <netinet/in.h>
# include <sys/socket.h>
# include <sys/uio.h>
# include <arpa/inet.h> /* htonl() and ntohl() */
# if defined(HAVE_SEM_OPEN) && !defined(POSIX_SEMAPHORES_NOT_ENABLED)
# include <semaphore.h>
typedef sem_t *SEM_HANDLE;
# endif
# define HANDLE int
# define SOCKET int
# define BOOL int
# define UINT32 uint32_t
# define INT32 int32_t
# define TRUE 1
# define FALSE 0
# define INVALID_HANDLE_VALUE (-1)
#endif
/*
* Issue 3110 - Solaris does not define SEM_VALUE_MAX
*/
#ifndef SEM_VALUE_MAX
#if defined(HAVE_SYSCONF) && defined(_SC_SEM_VALUE_MAX)
# define SEM_VALUE_MAX sysconf(_SC_SEM_VALUE_MAX)
#elif defined(_SEM_VALUE_MAX)
# define SEM_VALUE_MAX _SEM_VALUE_MAX
#elif defined(_POSIX_SEM_VALUE_MAX)
# define SEM_VALUE_MAX _POSIX_SEM_VALUE_MAX
#else
# define SEM_VALUE_MAX INT_MAX
#endif
#endif
/*
* Make sure Py_ssize_t available
*/
#if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN)
typedef int Py_ssize_t;
# define PY_SSIZE_T_MAX INT_MAX
# define PY_SSIZE_T_MIN INT_MIN
# define F_PY_SSIZE_T "i"
# define PyInt_FromSsize_t(n) PyInt_FromLong((long)n)
#else
# define F_PY_SSIZE_T "n"
#endif
/*
* Format codes
*/
#if SIZEOF_VOID_P == SIZEOF_LONG
# define F_POINTER "k"
# define T_POINTER T_ULONG
#elif defined(HAVE_LONG_LONG) && (SIZEOF_VOID_P == SIZEOF_LONG_LONG)
# define F_POINTER "K"
# define T_POINTER T_ULONGLONG
#else
# error "can't find format code for unsigned integer of same size as void*"
#endif
#ifdef MS_WINDOWS
# define F_HANDLE F_POINTER
# define T_HANDLE T_POINTER
# define F_SEM_HANDLE F_HANDLE
# define T_SEM_HANDLE T_HANDLE
# define F_DWORD "k"
# define T_DWORD T_ULONG
#else
# define F_HANDLE "i"
# define T_HANDLE T_INT
# define F_SEM_HANDLE F_POINTER
# define T_SEM_HANDLE T_POINTER
#endif
#if PY_VERSION_HEX >= 0x03000000
# define F_RBUFFER "y"
#else
# define F_RBUFFER "s"
#endif
/*
* Error codes which can be returned by functions called without GIL
*/
#define MP_SUCCESS (0)
#define MP_STANDARD_ERROR (-1)
#define MP_MEMORY_ERROR (-1001)
#define MP_END_OF_FILE (-1002)
#define MP_EARLY_END_OF_FILE (-1003)
#define MP_BAD_MESSAGE_LENGTH (-1004)
#define MP_SOCKET_ERROR (-1005)
#define MP_EXCEPTION_HAS_BEEN_SET (-1006)
PyObject *Billiard_SetError(PyObject *Type, int num);
/*
* Externs - not all will really exist on all platforms
*/
extern PyObject *Billiard_pickle_dumps;
extern PyObject *Billiard_pickle_loads;
extern PyObject *Billiard_pickle_protocol;
extern PyObject *Billiard_BufferTooShort;
extern PyTypeObject BilliardSemLockType;
extern PyTypeObject BilliardConnectionType;
extern PyTypeObject BilliardPipeConnectionType;
extern HANDLE sigint_event;
/*
* Py3k compatibility
*/
#if PY_VERSION_HEX >= 0x03000000
# define PICKLE_MODULE "pickle"
# define FROM_FORMAT PyUnicode_FromFormat
# define PyInt_FromLong PyLong_FromLong
# define PyInt_FromSsize_t PyLong_FromSsize_t
#else
# define PICKLE_MODULE "cPickle"
# define FROM_FORMAT PyString_FromFormat
#endif
#ifndef PyVarObject_HEAD_INIT
# define PyVarObject_HEAD_INIT(type, size) PyObject_HEAD_INIT(type) size,
#endif
#ifndef Py_TPFLAGS_HAVE_WEAKREFS
# define Py_TPFLAGS_HAVE_WEAKREFS 0
#endif
/*
* Connection definition
*/
#define CONNECTION_BUFFER_SIZE 131072
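/* Each connection carries a 128 kb inline receive buffer so that
   messages which fit avoid a PyMem_Malloc() per recv; larger messages
   fall back to a heap-allocated buffer (see Billiard_conn_recv_string). */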
typedef struct {
PyObject_HEAD
HANDLE handle;
int flags;
PyObject *weakreflist;
char buffer[CONNECTION_BUFFER_SIZE];
} BilliardConnectionObject;
/*
* Miscellaneous
*/
#define MAX_MESSAGE_LENGTH 0x7fffffff
#ifndef MIN
# define MIN(x, y) ((x) < (y) ? x : y)
# define MAX(x, y) ((x) > (y) ? x : y)
#endif
#endif /* MULTIPROCESSING_H */

@ -0,0 +1,149 @@
/*
* A type which wraps a pipe handle in message oriented mode
*
* pipe_connection.c
*
* Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt
*/
#include "multiprocessing.h"
#define CLOSE(h) CloseHandle(h)
/*
* Send string to the pipe; assumes in message oriented mode
*/
static Py_ssize_t
Billiard_conn_send_string(BilliardConnectionObject *conn, char *string, size_t length)
{
DWORD amount_written;
BOOL ret;
Py_BEGIN_ALLOW_THREADS
ret = WriteFile(conn->handle, string, length, &amount_written, NULL);
Py_END_ALLOW_THREADS
if (ret == 0 && GetLastError() == ERROR_NO_SYSTEM_RESOURCES) {
PyErr_Format(PyExc_ValueError, "Cannnot send %" PY_FORMAT_SIZE_T "d bytes over connection", length);
return MP_STANDARD_ERROR;
}
return ret ? MP_SUCCESS : MP_STANDARD_ERROR;
}
/*
* Attempts to read into buffer, or if buffer too small into *newbuffer.
*
* Returns number of bytes read. Assumes in message oriented mode.
*/
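/*
 * Control-flow sketch (as implemented below): ReadFile() on a
 * message-mode pipe fails with ERROR_MORE_DATA when the message is
 * larger than the supplied buffer; PeekNamedPipe() then reports how
 * many bytes of that message remain, so the tail can be read into a
 * single PyMem_Malloc'ed *newbuffer which the caller must PyMem_Free().
 */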
static Py_ssize_t
Billiard_conn_recv_string(BilliardConnectionObject *conn, char *buffer,
size_t buflength, char **newbuffer, size_t maxlength)
{
DWORD left, length, full_length, err;
BOOL ret;
*newbuffer = NULL;
Py_BEGIN_ALLOW_THREADS
ret = ReadFile(conn->handle, buffer, MIN(buflength, maxlength),
&length, NULL);
Py_END_ALLOW_THREADS
if (ret)
return length;
err = GetLastError();
if (err != ERROR_MORE_DATA) {
if (err == ERROR_BROKEN_PIPE)
return MP_END_OF_FILE;
return MP_STANDARD_ERROR;
}
if (!PeekNamedPipe(conn->handle, NULL, 0, NULL, NULL, &left))
return MP_STANDARD_ERROR;
full_length = length + left;
if (full_length > maxlength)
return MP_BAD_MESSAGE_LENGTH;
*newbuffer = PyMem_Malloc(full_length);
if (*newbuffer == NULL)
return MP_MEMORY_ERROR;
memcpy(*newbuffer, buffer, length);
Py_BEGIN_ALLOW_THREADS
ret = ReadFile(conn->handle, *newbuffer+length, left, &length, NULL);
Py_END_ALLOW_THREADS
if (ret) {
assert(length == left);
return full_length;
} else {
PyMem_Free(*newbuffer);
return MP_STANDARD_ERROR;
}
}
/*
* Check whether any data is available for reading
*/
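/*
 * With a zero timeout this is a single PeekNamedPipe(); otherwise the
 * pipe is polled with a sleep that grows by 1 ms per iteration, capped
 * at 20 ms, re-checking for pending signals between naps.  A negative
 * timeout blocks until data arrives.
 */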
static int
Billiard_conn_poll(BilliardConnectionObject *conn, double timeout, PyThreadState *_save)
{
DWORD bytes, deadline, delay;
int difference, res;
BOOL block = FALSE;
if (!PeekNamedPipe(conn->handle, NULL, 0, NULL, &bytes, NULL))
return MP_STANDARD_ERROR;
if (timeout == 0.0)
return bytes > 0;
if (timeout < 0.0)
block = TRUE;
else
/* XXX does not check for overflow */
deadline = GetTickCount() + (DWORD)(1000 * timeout + 0.5);
Sleep(0);
for (delay = 1 ; ; delay += 1) {
if (!PeekNamedPipe(conn->handle, NULL, 0, NULL, &bytes, NULL))
return MP_STANDARD_ERROR;
else if (bytes > 0)
return TRUE;
if (!block) {
difference = deadline - GetTickCount();
if (difference < 0)
return FALSE;
if ((int)delay > difference)
delay = difference;
}
if (delay > 20)
delay = 20;
Sleep(delay);
/* check for signals */
Py_BLOCK_THREADS
res = PyErr_CheckSignals();
Py_UNBLOCK_THREADS
if (res)
return MP_EXCEPTION_HAS_BEEN_SET;
}
}
/*
* "connection.h" defines the PipeConnection type using the definitions above
*/
#define CONNECTION_NAME "PipeConnection"
#define CONNECTION_TYPE BilliardPipeConnectionType
#include "connection.h"

@ -0,0 +1,666 @@
/*
* A type which wraps a semaphore
*
* semaphore.c
*
* Copyright (c) 2006-2008, R Oudkerk
* Licensed to PSF under a Contributor Agreement.
*/
#include "multiprocessing.h"
enum { RECURSIVE_MUTEX, SEMAPHORE };
typedef struct {
PyObject_HEAD
SEM_HANDLE handle;
long last_tid;
int count;
int maxvalue;
int kind;
char *name;
} BilliardSemLockObject;
#define ISMINE(o) (o->count > 0 && PyThread_get_thread_ident() == o->last_tid)
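/* A semaphore/lock is "mine" when this thread performed the outermost
   acquire: `count` is the per-process net acquire count and `last_tid`
   identifies the thread that last acquired the handle. */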
#ifdef MS_WINDOWS
/*
* Windows definitions
*/
#define SEM_FAILED NULL
#define SEM_CLEAR_ERROR() SetLastError(0)
#define SEM_GET_LAST_ERROR() GetLastError()
#define SEM_CREATE(name, val, max) CreateSemaphore(NULL, val, max, NULL)
#define SEM_CLOSE(sem) (CloseHandle(sem) ? 0 : -1)
#define SEM_GETVALUE(sem, pval) _Billiard_GetSemaphoreValue(sem, pval)
#define SEM_UNLINK(name) 0
static int
_Billiard_GetSemaphoreValue(HANDLE handle, long *value)
{
long previous;
switch (WaitForSingleObject(handle, 0)) {
case WAIT_OBJECT_0:
if (!ReleaseSemaphore(handle, 1, &previous))
return MP_STANDARD_ERROR;
*value = previous + 1;
return 0;
case WAIT_TIMEOUT:
*value = 0;
return 0;
default:
return MP_STANDARD_ERROR;
}
}
static PyObject *
Billiard_semlock_acquire(BilliardSemLockObject *self, PyObject *args, PyObject *kwds)
{
int blocking = 1;
double timeout;
PyObject *timeout_obj = Py_None;
DWORD res, full_msecs, msecs, start, ticks;
static char *kwlist[] = {"block", "timeout", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iO", kwlist,
&blocking, &timeout_obj))
return NULL;
/* calculate timeout */
if (!blocking) {
full_msecs = 0;
} else if (timeout_obj == Py_None) {
full_msecs = INFINITE;
} else {
timeout = PyFloat_AsDouble(timeout_obj);
if (PyErr_Occurred())
return NULL;
timeout *= 1000.0; /* convert to millisecs */
if (timeout < 0.0) {
timeout = 0.0;
} else if (timeout >= 0.5 * INFINITE) { /* 25 days */
PyErr_SetString(PyExc_OverflowError,
"timeout is too large");
return NULL;
}
full_msecs = (DWORD)(timeout + 0.5);
}
/* check whether we already own the lock */
if (self->kind == RECURSIVE_MUTEX && ISMINE(self)) {
++self->count;
Py_RETURN_TRUE;
}
/* check whether we can acquire without blocking */
if (WaitForSingleObject(self->handle, 0) == WAIT_OBJECT_0) {
self->last_tid = GetCurrentThreadId();
++self->count;
Py_RETURN_TRUE;
}
msecs = full_msecs;
start = GetTickCount();
for ( ; ; ) {
HANDLE handles[2] = {self->handle, sigint_event};
/* do the wait */
Py_BEGIN_ALLOW_THREADS
ResetEvent(sigint_event);
res = WaitForMultipleObjects(2, handles, FALSE, msecs);
Py_END_ALLOW_THREADS
/* handle result */
if (res != WAIT_OBJECT_0 + 1)
break;
/* got SIGINT so give signal handler a chance to run */
Sleep(1);
/* if this is main thread let KeyboardInterrupt be raised */
if (PyErr_CheckSignals())
return NULL;
/* recalculate timeout */
if (msecs != INFINITE) {
ticks = GetTickCount();
if ((DWORD)(ticks - start) >= full_msecs)
Py_RETURN_FALSE;
msecs = full_msecs - (ticks - start);
}
}
/* handle result */
switch (res) {
case WAIT_TIMEOUT:
Py_RETURN_FALSE;
case WAIT_OBJECT_0:
self->last_tid = GetCurrentThreadId();
++self->count;
Py_RETURN_TRUE;
case WAIT_FAILED:
return PyErr_SetFromWindowsErr(0);
default:
PyErr_Format(PyExc_RuntimeError, "WaitForSingleObject() or "
"WaitForMultipleObjects() gave unrecognized "
"value %d", res);
return NULL;
}
}
static PyObject *
Billiard_semlock_release(BilliardSemLockObject *self, PyObject *args)
{
if (self->kind == RECURSIVE_MUTEX) {
if (!ISMINE(self)) {
PyErr_SetString(PyExc_AssertionError, "attempt to "
"release recursive lock not owned "
"by thread");
return NULL;
}
if (self->count > 1) {
--self->count;
Py_RETURN_NONE;
}
assert(self->count == 1);
}
if (!ReleaseSemaphore(self->handle, 1, NULL)) {
if (GetLastError() == ERROR_TOO_MANY_POSTS) {
PyErr_SetString(PyExc_ValueError, "semaphore or lock "
"released too many times");
return NULL;
} else {
return PyErr_SetFromWindowsErr(0);
}
}
--self->count;
Py_RETURN_NONE;
}
#else /* !MS_WINDOWS */
/*
* Unix definitions
*/
#define SEM_CLEAR_ERROR()
#define SEM_GET_LAST_ERROR() 0
#define SEM_CREATE(name, val, max) sem_open(name, O_CREAT | O_EXCL, 0600, val)
#define SEM_CLOSE(sem) sem_close(sem)
#define SEM_GETVALUE(sem, pval) sem_getvalue(sem, pval)
#define SEM_UNLINK(name) sem_unlink(name)
#ifndef HAVE_SEM_UNLINK
# define sem_unlink(name) 0
#endif
//#ifndef HAVE_SEM_TIMEDWAIT
# define sem_timedwait(sem,deadline) Billiard_sem_timedwait_save(sem,deadline,_save)
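/*
 * The HAVE_SEM_TIMEDWAIT guard is left commented out above, so this
 * poll-based fallback is apparently always used.  It emulates
 * sem_timedwait() with sem_trywait(), sleeping between attempts in
 * steps that grow by 1 ms per iteration up to a 20 ms cap, and checks
 * for pending signals between naps so the wait stays interruptible.
 */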
int
Billiard_sem_timedwait_save(sem_t *sem, struct timespec *deadline, PyThreadState *_save)
{
int res;
unsigned long delay, difference;
struct timeval now, tvdeadline, tvdelay;
errno = 0;
tvdeadline.tv_sec = deadline->tv_sec;
tvdeadline.tv_usec = deadline->tv_nsec / 1000;
for (delay = 0 ; ; delay += 1000) {
/* poll */
if (sem_trywait(sem) == 0)
return 0;
else if (errno != EAGAIN)
return MP_STANDARD_ERROR;
/* get current time */
if (gettimeofday(&now, NULL) < 0)
return MP_STANDARD_ERROR;
/* check for timeout */
if (tvdeadline.tv_sec < now.tv_sec ||
(tvdeadline.tv_sec == now.tv_sec &&
tvdeadline.tv_usec <= now.tv_usec)) {
errno = ETIMEDOUT;
return MP_STANDARD_ERROR;
}
/* calculate how much time is left */
difference = (tvdeadline.tv_sec - now.tv_sec) * 1000000 +
(tvdeadline.tv_usec - now.tv_usec);
/* check delay not too long -- maximum is 20 msecs */
if (delay > 20000)
delay = 20000;
if (delay > difference)
delay = difference;
/* sleep */
tvdelay.tv_sec = delay / 1000000;
tvdelay.tv_usec = delay % 1000000;
if (select(0, NULL, NULL, NULL, &tvdelay) < 0)
return MP_STANDARD_ERROR;
/* check for signals */
Py_BLOCK_THREADS
res = PyErr_CheckSignals();
Py_UNBLOCK_THREADS
if (res) {
errno = EINTR;
return MP_EXCEPTION_HAS_BEEN_SET;
}
}
}
//#endif /* !HAVE_SEM_TIMEDWAIT */
static PyObject *
Billiard_semlock_acquire(BilliardSemLockObject *self, PyObject *args, PyObject *kwds)
{
int blocking = 1, res;
double timeout;
PyObject *timeout_obj = Py_None;
struct timespec deadline = {0};
struct timeval now;
long sec, nsec;
static char *kwlist[] = {"block", "timeout", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iO", kwlist,
&blocking, &timeout_obj))
return NULL;
if (self->kind == RECURSIVE_MUTEX && ISMINE(self)) {
++self->count;
Py_RETURN_TRUE;
}
if (timeout_obj != Py_None) {
timeout = PyFloat_AsDouble(timeout_obj);
if (PyErr_Occurred())
return NULL;
if (timeout < 0.0)
timeout = 0.0;
if (gettimeofday(&now, NULL) < 0) {
PyErr_SetFromErrno(PyExc_OSError);
return NULL;
}
sec = (long) timeout;
nsec = (long) (1e9 * (timeout - sec) + 0.5);
deadline.tv_sec = now.tv_sec + sec;
deadline.tv_nsec = now.tv_usec * 1000 + nsec;
deadline.tv_sec += (deadline.tv_nsec / 1000000000);
deadline.tv_nsec %= 1000000000;
}
do {
Py_BEGIN_ALLOW_THREADS
if (blocking && timeout_obj == Py_None)
res = sem_wait(self->handle);
else if (!blocking)
res = sem_trywait(self->handle);
else
res = sem_timedwait(self->handle, &deadline);
Py_END_ALLOW_THREADS
if (res == MP_EXCEPTION_HAS_BEEN_SET)
break;
} while (res < 0 && errno == EINTR && !PyErr_CheckSignals());
if (res < 0) {
if (errno == EAGAIN || errno == ETIMEDOUT)
Py_RETURN_FALSE;
else if (errno == EINTR)
return NULL;
else
return PyErr_SetFromErrno(PyExc_OSError);
}
++self->count;
self->last_tid = PyThread_get_thread_ident();
Py_RETURN_TRUE;
}
static PyObject *
Billiard_semlock_release(BilliardSemLockObject *self, PyObject *args)
{
if (self->kind == RECURSIVE_MUTEX) {
if (!ISMINE(self)) {
PyErr_SetString(PyExc_AssertionError, "attempt to "
"release recursive lock not owned "
"by thread");
return NULL;
}
if (self->count > 1) {
--self->count;
Py_RETURN_NONE;
}
assert(self->count == 1);
} else {
#ifdef HAVE_BROKEN_SEM_GETVALUE
/* We will only check properly the maxvalue == 1 case */
if (self->maxvalue == 1) {
/* make sure that already locked */
if (sem_trywait(self->handle) < 0) {
if (errno != EAGAIN) {
PyErr_SetFromErrno(PyExc_OSError);
return NULL;
}
/* it is already locked as expected */
} else {
/* it was not locked so undo wait and raise */
if (sem_post(self->handle) < 0) {
PyErr_SetFromErrno(PyExc_OSError);
return NULL;
}
PyErr_SetString(PyExc_ValueError, "semaphore "
"or lock released too many "
"times");
return NULL;
}
}
#else
int sval;
/* This check is not an absolute guarantee that the semaphore
does not rise above maxvalue. */
if (sem_getvalue(self->handle, &sval) < 0) {
return PyErr_SetFromErrno(PyExc_OSError);
} else if (sval >= self->maxvalue) {
PyErr_SetString(PyExc_ValueError, "semaphore or lock "
"released too many times");
return NULL;
}
#endif
}
if (sem_post(self->handle) < 0)
return PyErr_SetFromErrno(PyExc_OSError);
--self->count;
Py_RETURN_NONE;
}
#endif /* !MS_WINDOWS */
/*
* All platforms
*/
static PyObject *
Billiard_newsemlockobject(PyTypeObject *type, SEM_HANDLE handle, int kind, int maxvalue,
char *name)
{
BilliardSemLockObject *self;
self = PyObject_New(BilliardSemLockObject, type);
if (!self)
return NULL;
self->handle = handle;
self->kind = kind;
self->count = 0;
self->last_tid = 0;
self->maxvalue = maxvalue;
self->name = name;
return (PyObject*)self;
}
static PyObject *
Billiard_semlock_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
SEM_HANDLE handle = SEM_FAILED;
int kind, maxvalue, value, unlink;
PyObject *result;
char *name, *name_copy = NULL;
static char *kwlist[] = {"kind", "value", "maxvalue", "name", "unlink",
NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwds, "iiisi", kwlist,
&kind, &value, &maxvalue, &name, &unlink))
return NULL;
if (kind != RECURSIVE_MUTEX && kind != SEMAPHORE) {
PyErr_SetString(PyExc_ValueError, "unrecognized kind");
return NULL;
}
if (!unlink) {
name_copy = PyMem_Malloc(strlen(name) + 1);
if (name_copy == NULL)
goto failure;
strcpy(name_copy, name);
}
SEM_CLEAR_ERROR();
handle = SEM_CREATE(name, value, maxvalue);
/* On Windows we should fail if GetLastError()==ERROR_ALREADY_EXISTS */
if (handle == SEM_FAILED || SEM_GET_LAST_ERROR() != 0)
goto failure;
if (unlink && SEM_UNLINK(name) < 0)
goto failure;
result = Billiard_newsemlockobject(type, handle, kind, maxvalue, name_copy);
if (!result)
goto failure;
return result;
failure:
if (handle != SEM_FAILED)
SEM_CLOSE(handle);
PyMem_Free(name_copy);
Billiard_SetError(NULL, MP_STANDARD_ERROR);
return NULL;
}
static PyObject *
Billiard_semlock_rebuild(PyTypeObject *type, PyObject *args)
{
SEM_HANDLE handle;
int kind, maxvalue;
char *name;
if (!PyArg_ParseTuple(args, F_SEM_HANDLE "iiz",
&handle, &kind, &maxvalue, &name))
return NULL;
#ifndef MS_WINDOWS
if (name != NULL) {
        handle = sem_open(name, 0);
        if (handle == SEM_FAILED)
            return PyErr_SetFromErrno(PyExc_OSError);
}
#endif
return Billiard_newsemlockobject(type, handle, kind, maxvalue, name);
}
static void
Billiard_semlock_dealloc(BilliardSemLockObject* self)
{
if (self->handle != SEM_FAILED)
SEM_CLOSE(self->handle);
PyMem_Free(self->name);
PyObject_Del(self);
}
static PyObject *
Billiard_semlock_count(BilliardSemLockObject *self)
{
return PyInt_FromLong((long)self->count);
}
static PyObject *
Billiard_semlock_ismine(BilliardSemLockObject *self)
{
/* only makes sense for a lock */
return PyBool_FromLong(ISMINE(self));
}
static PyObject *
Billiard_semlock_getvalue(BilliardSemLockObject *self)
{
#ifdef HAVE_BROKEN_SEM_GETVALUE
PyErr_SetNone(PyExc_NotImplementedError);
return NULL;
#else
int sval;
if (SEM_GETVALUE(self->handle, &sval) < 0)
return Billiard_SetError(NULL, MP_STANDARD_ERROR);
/* some posix implementations use negative numbers to indicate
the number of waiting threads */
if (sval < 0)
sval = 0;
return PyInt_FromLong((long)sval);
#endif
}
static PyObject *
Billiard_semlock_iszero(BilliardSemLockObject *self)
{
#ifdef HAVE_BROKEN_SEM_GETVALUE
if (sem_trywait(self->handle) < 0) {
if (errno == EAGAIN)
Py_RETURN_TRUE;
return Billiard_SetError(NULL, MP_STANDARD_ERROR);
} else {
if (sem_post(self->handle) < 0)
return Billiard_SetError(NULL, MP_STANDARD_ERROR);
Py_RETURN_FALSE;
}
#else
int sval;
if (SEM_GETVALUE(self->handle, &sval) < 0)
return Billiard_SetError(NULL, MP_STANDARD_ERROR);
return PyBool_FromLong((long)sval == 0);
#endif
}
static PyObject *
Billiard_semlock_afterfork(BilliardSemLockObject *self)
{
self->count = 0;
Py_RETURN_NONE;
}
static PyObject *
Billiard_semlock_unlink(PyObject *ignore, PyObject *args)
{
char *name;
if (!PyArg_ParseTuple(args, "s", &name))
return NULL;
if (SEM_UNLINK(name) < 0) {
Billiard_SetError(NULL, MP_STANDARD_ERROR);
return NULL;
}
Py_RETURN_NONE;
}
/*
* Semaphore methods
*/
static PyMethodDef Billiard_semlock_methods[] = {
{"acquire", (PyCFunction)Billiard_semlock_acquire, METH_VARARGS | METH_KEYWORDS,
"acquire the semaphore/lock"},
{"release", (PyCFunction)Billiard_semlock_release, METH_NOARGS,
"release the semaphore/lock"},
{"__enter__", (PyCFunction)Billiard_semlock_acquire, METH_VARARGS | METH_KEYWORDS,
"enter the semaphore/lock"},
{"__exit__", (PyCFunction)Billiard_semlock_release, METH_VARARGS,
"exit the semaphore/lock"},
{"_count", (PyCFunction)Billiard_semlock_count, METH_NOARGS,
"num of `acquire()`s minus num of `release()`s for this process"},
{"_is_mine", (PyCFunction)Billiard_semlock_ismine, METH_NOARGS,
"whether the lock is owned by this thread"},
{"_get_value", (PyCFunction)Billiard_semlock_getvalue, METH_NOARGS,
"get the value of the semaphore"},
{"_is_zero", (PyCFunction)Billiard_semlock_iszero, METH_NOARGS,
"returns whether semaphore has value zero"},
{"_rebuild", (PyCFunction)Billiard_semlock_rebuild, METH_VARARGS | METH_CLASS,
""},
{"_after_fork", (PyCFunction)Billiard_semlock_afterfork, METH_NOARGS,
"rezero the net acquisition count after fork()"},
{"sem_unlink", (PyCFunction)Billiard_semlock_unlink, METH_VARARGS | METH_STATIC,
"unlink the named semaphore using sem_unlink()"},
{NULL}
};
/*
* Member table
*/
static PyMemberDef Billiard_semlock_members[] = {
{"handle", T_SEM_HANDLE, offsetof(BilliardSemLockObject, handle), READONLY,
""},
{"kind", T_INT, offsetof(BilliardSemLockObject, kind), READONLY,
""},
{"maxvalue", T_INT, offsetof(BilliardSemLockObject, maxvalue), READONLY,
""},
{"name", T_STRING, offsetof(BilliardSemLockObject, name), READONLY,
""},
{NULL}
};
/*
* Semaphore type
*/
PyTypeObject BilliardSemLockType = {
PyVarObject_HEAD_INIT(NULL, 0)
/* tp_name */ "_billiard.SemLock",
/* tp_basicsize */ sizeof(BilliardSemLockObject),
/* tp_itemsize */ 0,
/* tp_dealloc */ (destructor)Billiard_semlock_dealloc,
/* tp_print */ 0,
/* tp_getattr */ 0,
/* tp_setattr */ 0,
/* tp_compare */ 0,
/* tp_repr */ 0,
/* tp_as_number */ 0,
/* tp_as_sequence */ 0,
/* tp_as_mapping */ 0,
/* tp_hash */ 0,
/* tp_call */ 0,
/* tp_str */ 0,
/* tp_getattro */ 0,
/* tp_setattro */ 0,
/* tp_as_buffer */ 0,
/* tp_flags */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
/* tp_doc */ "Semaphore/Mutex type",
/* tp_traverse */ 0,
/* tp_clear */ 0,
/* tp_richcompare */ 0,
/* tp_weaklistoffset */ 0,
/* tp_iter */ 0,
/* tp_iternext */ 0,
/* tp_methods */ Billiard_semlock_methods,
/* tp_members */ Billiard_semlock_members,
/* tp_getset */ 0,
/* tp_base */ 0,
/* tp_dict */ 0,
/* tp_descr_get */ 0,
/* tp_descr_set */ 0,
/* tp_dictoffset */ 0,
/* tp_init */ 0,
/* tp_alloc */ 0,
/* tp_new */ Billiard_semlock_new,
};

@ -0,0 +1,225 @@
/*
* A type which wraps a socket
*
* socket_connection.c
*
* Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt
*/
#include "multiprocessing.h"
#ifdef MS_WINDOWS
# define WRITE(h, buffer, length) send((SOCKET)h, buffer, length, 0)
# define READ(h, buffer, length) recv((SOCKET)h, buffer, length, 0)
# define CLOSE(h) closesocket((SOCKET)h)
#else
# define WRITE(h, buffer, length) write(h, buffer, length)
# define READ(h, buffer, length) read(h, buffer, length)
# define CLOSE(h) close(h)
#endif
/*
* Send string to file descriptor
*/
void _Billiard_setblocking(int fd, int blocking)
{
#ifdef MS_WINDOWS
unsigned long mode = blocking ? 0 : 1;
ioctlsocket(fd, FIONBIO, &mode);
#else
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags >= 0) {   /* -1 means fcntl() failed */
        flags = blocking ? (flags & ~O_NONBLOCK) : (flags | O_NONBLOCK);
        fcntl(fd, F_SETFL, flags);
    }
#endif
}
ssize_t
_Billiard_conn_send_offset(HANDLE fd, char *string, Py_ssize_t len, Py_ssize_t offset) {
char *p = string;
p += offset;
return WRITE(fd, p, (size_t)len - offset);
}
static Py_ssize_t
_Billiard_conn_sendall(HANDLE h, char *string, size_t length)
{
char *p = string;
Py_ssize_t res;
while (length > 0) {
res = WRITE(h, p, length);
if (res < 0)
return MP_SOCKET_ERROR;
length -= res;
p += res;
}
return MP_SUCCESS;
}
/*
* Receive string of exact length from file descriptor
*/
static Py_ssize_t
_Billiard_conn_recvall(HANDLE h, char *buffer, size_t length)
{
size_t remaining = length;
Py_ssize_t temp;
char *p = buffer;
while (remaining > 0) {
temp = READ(h, p, remaining);
if (temp <= 0) {
if (temp == 0)
return remaining == length ?
MP_END_OF_FILE : MP_EARLY_END_OF_FILE;
else
return temp;
}
remaining -= temp;
p += temp;
}
return MP_SUCCESS;
}
/*
* Send a string prepended by the string length in network byte order
*/
static Py_ssize_t
Billiard_conn_send_string(BilliardConnectionObject *conn, char *string, size_t length)
{
Py_ssize_t res;
/* The "header" of the message is a 32 bit unsigned number (in
network order) which specifies the length of the "body". If
the message is shorter than about 16kb then it is quicker to
combine the "header" and the "body" of the message and send
them at once. */
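    /* Illustrative wire format: a 5 byte body "hello" goes out as
       "\x00\x00\x00\x05hello" -- htonl(5) followed by the payload. */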
if (length < (16*1024)) {
char *message;
message = PyMem_Malloc(length+4);
if (message == NULL)
return MP_MEMORY_ERROR;
*(UINT32*)message = htonl((UINT32)length);
memcpy(message+4, string, length);
Py_BEGIN_ALLOW_THREADS
res = _Billiard_conn_sendall(conn->handle, message, length+4);
Py_END_ALLOW_THREADS
PyMem_Free(message);
} else {
UINT32 lenbuff;
if (length > MAX_MESSAGE_LENGTH)
return MP_BAD_MESSAGE_LENGTH;
lenbuff = htonl((UINT32)length);
Py_BEGIN_ALLOW_THREADS
res = _Billiard_conn_sendall(conn->handle, (char*)&lenbuff, 4);
if (res == MP_SUCCESS)
res = _Billiard_conn_sendall(conn->handle, string, length);
Py_END_ALLOW_THREADS
}
return res;
}
/*
* Attempts to read into buffer, or failing that into *newbuffer
*
* Returns number of bytes read.
*/
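/*
 * Ownership note: when *newbuffer is set on return, the message was
 * bigger than buflength and was received into freshly PyMem_Malloc'ed
 * storage which the caller must PyMem_Free() (see the `freeme` handling
 * in connection.h).
 */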
static Py_ssize_t
Billiard_conn_recv_string(BilliardConnectionObject *conn, char *buffer,
size_t buflength, char **newbuffer, size_t maxlength)
{
int res;
UINT32 ulength;
*newbuffer = NULL;
Py_BEGIN_ALLOW_THREADS
res = _Billiard_conn_recvall(conn->handle, (char*)&ulength, 4);
Py_END_ALLOW_THREADS
if (res < 0)
return res;
ulength = ntohl(ulength);
if (ulength > maxlength)
return MP_BAD_MESSAGE_LENGTH;
if (ulength <= buflength) {
Py_BEGIN_ALLOW_THREADS
res = _Billiard_conn_recvall(conn->handle, buffer, (size_t)ulength);
Py_END_ALLOW_THREADS
return res < 0 ? res : ulength;
} else {
*newbuffer = PyMem_Malloc((size_t)ulength);
if (*newbuffer == NULL)
return MP_MEMORY_ERROR;
Py_BEGIN_ALLOW_THREADS
res = _Billiard_conn_recvall(conn->handle, *newbuffer, (size_t)ulength);
Py_END_ALLOW_THREADS
return res < 0 ? (Py_ssize_t)res : (Py_ssize_t)ulength;
}
}
/*
* Check whether any data is available for reading -- neg timeout blocks
*/
static int
Billiard_conn_poll(BilliardConnectionObject *conn, double timeout, PyThreadState *_save)
{
int res;
fd_set rfds;
/*
* Verify the handle, issue 3321. Not required for windows.
*/
#ifndef MS_WINDOWS
if (((int)conn->handle) < 0 || ((int)conn->handle) >= FD_SETSIZE) {
Py_BLOCK_THREADS
PyErr_SetString(PyExc_IOError, "handle out of range in select()");
Py_UNBLOCK_THREADS
return MP_EXCEPTION_HAS_BEEN_SET;
}
#endif
FD_ZERO(&rfds);
FD_SET((SOCKET)conn->handle, &rfds);
if (timeout < 0.0) {
res = select((int)conn->handle+1, &rfds, NULL, NULL, NULL);
} else {
struct timeval tv;
tv.tv_sec = (long)timeout;
tv.tv_usec = (long)((timeout - tv.tv_sec) * 1e6 + 0.5);
res = select((int)conn->handle+1, &rfds, NULL, NULL, &tv);
}
if (res < 0) {
return MP_SOCKET_ERROR;
} else if (FD_ISSET(conn->handle, &rfds)) {
return TRUE;
} else {
assert(res == 0);
return FALSE;
}
}
/*
* "connection.h" defines the Connection type using defs above
*/
#define CONNECTION_NAME "Connection"
#define CONNECTION_TYPE BilliardConnectionType
#include "connection.h"

@ -0,0 +1,266 @@
/*
* Win32 functions used by multiprocessing package
*
* win32_functions.c
*
* Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt
*/
#include "multiprocessing.h"
#define WIN32_FUNCTION(func) \
{#func, (PyCFunction)win32_ ## func, METH_VARARGS | METH_STATIC, ""}
#define WIN32_CONSTANT(fmt, con) \
PyDict_SetItemString(Win32Type.tp_dict, #con, Py_BuildValue(fmt, con))
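/*
 * Example expansion (illustrative): WIN32_FUNCTION(CloseHandle) becomes
 *
 *     {"CloseHandle", (PyCFunction)win32_CloseHandle,
 *      METH_VARARGS | METH_STATIC, ""},
 *
 * so each wrapped API ends up as a static method on the _billiard.win32
 * namespace object built by create_win32_namespace().
 */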
static PyObject *
win32_CloseHandle(PyObject *self, PyObject *args)
{
HANDLE hObject;
BOOL success;
if (!PyArg_ParseTuple(args, F_HANDLE, &hObject))
return NULL;
Py_BEGIN_ALLOW_THREADS
success = CloseHandle(hObject);
Py_END_ALLOW_THREADS
if (!success)
return PyErr_SetFromWindowsErr(0);
Py_RETURN_NONE;
}
static PyObject *
win32_ConnectNamedPipe(PyObject *self, PyObject *args)
{
HANDLE hNamedPipe;
LPOVERLAPPED lpOverlapped;
BOOL success;
if (!PyArg_ParseTuple(args, F_HANDLE F_POINTER,
&hNamedPipe, &lpOverlapped))
return NULL;
Py_BEGIN_ALLOW_THREADS
success = ConnectNamedPipe(hNamedPipe, lpOverlapped);
Py_END_ALLOW_THREADS
if (!success)
return PyErr_SetFromWindowsErr(0);
Py_RETURN_NONE;
}
static PyObject *
win32_CreateFile(PyObject *self, PyObject *args)
{
LPCTSTR lpFileName;
DWORD dwDesiredAccess;
DWORD dwShareMode;
LPSECURITY_ATTRIBUTES lpSecurityAttributes;
DWORD dwCreationDisposition;
DWORD dwFlagsAndAttributes;
HANDLE hTemplateFile;
HANDLE handle;
if (!PyArg_ParseTuple(args, "s" F_DWORD F_DWORD F_POINTER
F_DWORD F_DWORD F_HANDLE,
&lpFileName, &dwDesiredAccess, &dwShareMode,
&lpSecurityAttributes, &dwCreationDisposition,
&dwFlagsAndAttributes, &hTemplateFile))
return NULL;
Py_BEGIN_ALLOW_THREADS
handle = CreateFile(lpFileName, dwDesiredAccess,
dwShareMode, lpSecurityAttributes,
dwCreationDisposition,
dwFlagsAndAttributes, hTemplateFile);
Py_END_ALLOW_THREADS
if (handle == INVALID_HANDLE_VALUE)
return PyErr_SetFromWindowsErr(0);
return Py_BuildValue(F_HANDLE, handle);
}
static PyObject *
win32_CreateNamedPipe(PyObject *self, PyObject *args)
{
LPCTSTR lpName;
DWORD dwOpenMode;
DWORD dwPipeMode;
DWORD nMaxInstances;
DWORD nOutBufferSize;
DWORD nInBufferSize;
DWORD nDefaultTimeOut;
LPSECURITY_ATTRIBUTES lpSecurityAttributes;
HANDLE handle;
if (!PyArg_ParseTuple(args, "s" F_DWORD F_DWORD F_DWORD
F_DWORD F_DWORD F_DWORD F_POINTER,
&lpName, &dwOpenMode, &dwPipeMode,
&nMaxInstances, &nOutBufferSize,
&nInBufferSize, &nDefaultTimeOut,
&lpSecurityAttributes))
return NULL;
Py_BEGIN_ALLOW_THREADS
handle = CreateNamedPipe(lpName, dwOpenMode, dwPipeMode,
nMaxInstances, nOutBufferSize,
nInBufferSize, nDefaultTimeOut,
lpSecurityAttributes);
Py_END_ALLOW_THREADS
if (handle == INVALID_HANDLE_VALUE)
return PyErr_SetFromWindowsErr(0);
return Py_BuildValue(F_HANDLE, handle);
}
static PyObject *
win32_ExitProcess(PyObject *self, PyObject *args)
{
UINT uExitCode;
if (!PyArg_ParseTuple(args, "I", &uExitCode))
return NULL;
#if defined(Py_DEBUG)
SetErrorMode(SEM_FAILCRITICALERRORS|SEM_NOALIGNMENTFAULTEXCEPT|SEM_NOGPFAULTERRORBOX|SEM_NOOPENFILEERRORBOX);
_CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_DEBUG);
#endif
ExitProcess(uExitCode);
return NULL;
}
static PyObject *
win32_GetLastError(PyObject *self, PyObject *args)
{
return Py_BuildValue(F_DWORD, GetLastError());
}
static PyObject *
win32_OpenProcess(PyObject *self, PyObject *args)
{
DWORD dwDesiredAccess;
BOOL bInheritHandle;
DWORD dwProcessId;
HANDLE handle;
if (!PyArg_ParseTuple(args, F_DWORD "i" F_DWORD,
&dwDesiredAccess, &bInheritHandle, &dwProcessId))
return NULL;
handle = OpenProcess(dwDesiredAccess, bInheritHandle, dwProcessId);
if (handle == NULL)
return PyErr_SetFromWindowsErr(0);
return Py_BuildValue(F_HANDLE, handle);
}
static PyObject *
win32_SetNamedPipeHandleState(PyObject *self, PyObject *args)
{
HANDLE hNamedPipe;
PyObject *oArgs[3];
DWORD dwArgs[3], *pArgs[3] = {NULL, NULL, NULL};
int i;
if (!PyArg_ParseTuple(args, F_HANDLE "OOO",
&hNamedPipe, &oArgs[0], &oArgs[1], &oArgs[2]))
return NULL;
PyErr_Clear();
for (i = 0 ; i < 3 ; i++) {
if (oArgs[i] != Py_None) {
dwArgs[i] = PyInt_AsUnsignedLongMask(oArgs[i]);
if (PyErr_Occurred())
return NULL;
pArgs[i] = &dwArgs[i];
}
}
if (!SetNamedPipeHandleState(hNamedPipe, pArgs[0], pArgs[1], pArgs[2]))
return PyErr_SetFromWindowsErr(0);
Py_RETURN_NONE;
}
static PyObject *
win32_WaitNamedPipe(PyObject *self, PyObject *args)
{
LPCTSTR lpNamedPipeName;
DWORD nTimeOut;
BOOL success;
if (!PyArg_ParseTuple(args, "s" F_DWORD, &lpNamedPipeName, &nTimeOut))
return NULL;
Py_BEGIN_ALLOW_THREADS
success = WaitNamedPipe(lpNamedPipeName, nTimeOut);
Py_END_ALLOW_THREADS
if (!success)
return PyErr_SetFromWindowsErr(0);
Py_RETURN_NONE;
}
static PyMethodDef win32_methods[] = {
WIN32_FUNCTION(CloseHandle),
WIN32_FUNCTION(GetLastError),
WIN32_FUNCTION(OpenProcess),
WIN32_FUNCTION(ExitProcess),
WIN32_FUNCTION(ConnectNamedPipe),
WIN32_FUNCTION(CreateFile),
WIN32_FUNCTION(CreateNamedPipe),
WIN32_FUNCTION(SetNamedPipeHandleState),
WIN32_FUNCTION(WaitNamedPipe),
{NULL}
};
PyTypeObject Win32Type = {
PyVarObject_HEAD_INIT(NULL, 0)
};
PyObject *
create_win32_namespace(void)
{
Win32Type.tp_name = "_billiard.win32";
Win32Type.tp_methods = win32_methods;
if (PyType_Ready(&Win32Type) < 0)
return NULL;
Py_INCREF(&Win32Type);
WIN32_CONSTANT(F_DWORD, ERROR_ALREADY_EXISTS);
WIN32_CONSTANT(F_DWORD, ERROR_PIPE_BUSY);
WIN32_CONSTANT(F_DWORD, ERROR_PIPE_CONNECTED);
WIN32_CONSTANT(F_DWORD, ERROR_SEM_TIMEOUT);
WIN32_CONSTANT(F_DWORD, GENERIC_READ);
WIN32_CONSTANT(F_DWORD, GENERIC_WRITE);
WIN32_CONSTANT(F_DWORD, INFINITE);
WIN32_CONSTANT(F_DWORD, NMPWAIT_WAIT_FOREVER);
WIN32_CONSTANT(F_DWORD, OPEN_EXISTING);
WIN32_CONSTANT(F_DWORD, PIPE_ACCESS_DUPLEX);
WIN32_CONSTANT(F_DWORD, PIPE_ACCESS_INBOUND);
WIN32_CONSTANT(F_DWORD, PIPE_READMODE_MESSAGE);
WIN32_CONSTANT(F_DWORD, PIPE_TYPE_MESSAGE);
WIN32_CONSTANT(F_DWORD, PIPE_UNLIMITED_INSTANCES);
WIN32_CONSTANT(F_DWORD, PIPE_WAIT);
WIN32_CONSTANT(F_DWORD, PROCESS_ALL_ACCESS);
WIN32_CONSTANT("i", NULL);
return (PyObject*)&Win32Type;
}

746 PKG-INFO Normal file

@ -0,0 +1,746 @@
Metadata-Version: 1.1
Name: billiard
Version: 3.3.0.18
Summary: Python multiprocessing fork with improvements and bugfixes
Home-page: http://github.com/celery/billiard
Author: Ask Solem
Author-email: ask@celeryproject.org
License: BSD
Description: ========
billiard
========
:version: 3.3.0.18
About
-----
`billiard` is a fork of the Python 2.7 `multiprocessing <http://docs.python.org/library/multiprocessing.html>`_
package. The multiprocessing package itself is a renamed and updated version of
R Oudkerk's `pyprocessing <http://pypi.python.org/pypi/processing/>`_ package.
This standalone variant is intended to be compatible with Python 2.4 and 2.5,
and will draw its fixes/improvements from python-trunk.
- This package would not be possible if not for the contributions of not only
the current maintainers but all of the contributors to the original pyprocessing
package listed `here <http://pyprocessing.berlios.de/doc/THANKS.html>`_
- It is also a fork of the multiprocessing backport package by Christian Heimes.
- It includes the no-execv patch contributed by R. Oudkerk.
- And the Pool improvements previously located in `Celery`_.
.. _`Celery`: http://celeryproject.org
Bug reporting
-------------
Please report bugs related to multiprocessing at the
`Python bug tracker <http://bugs.python.org/>`_. Issues related to billiard
should be reported at http://github.com/celery/billiard/issues.
.. image:: https://d2weczhvl823v0.cloudfront.net/celery/billiard/trend.png
:alt: Bitdeli badge
:target: https://bitdeli.com/free
===========
Changes
===========
3.3.0.18 - 2014-06-20
---------------------
- Now compiles on GNU/kFreeBSD
Contributed by Michael Fladischer.
- Pool: `AF_PIPE` address fixed so that it works on recent Windows versions
in combination with Python 2.7.7.
Fix contributed by Joshua Tacoma.
- Pool: Fix for `Supervisor object has no attribute _children` error.
Fix contributed by Andres Riancho.
- Pool: Fixed bug with human_status(None).
- Pool: shrink did not work properly if asked to remove more than 1 process.
3.3.0.17 - 2014-04-16
---------------------
- Fixes SemLock on Python 3.4 (Issue #107) when using
``forking_enable(False)``.
- Pool: Include more useful exitcode information when processes exit.
3.3.0.16 - 2014-02-11
---------------------
- Previous release was missing the billiard.py3 package from MANIFEST
so the installation would not work on Python 3.
3.3.0.15 - 2014-02-10
---------------------
- Pool: Fixed "cannot join process not started" error.
- Now uses billiard.py2 and billiard.py3 specific packages that are installed
depending on the python version used.
This way the installation will not import version specific modules (and
possibly crash).
3.3.0.14 - 2014-01-17
---------------------
- Fixed problem with our backwards compatible ``bytes`` wrapper
(Issue #103).
- No longer expects frozen applications to have a valid ``__file__``
attribute.
Fix contributed by George Sibble.
3.3.0.13 - 2013-12-13
---------------------
- Fixes compatibility with Python < 2.7.6
- No longer attempts to handle ``SIGBUS``
Contributed by Vishal Vatsa.
- Non-thread based pool now only handles signals:
``SIGHUP``, ``SIGQUIT``, ``SIGTERM``, ``SIGUSR1``,
``SIGUSR2``.
- setup.py: Only show compilation warning for build related commands.
3.3.0.12 - 2013-12-09
---------------------
- Fixed installation for Python 3.
Contributed by Rickert Mulder.
- Pool: Fixed bug with maxtasksperchild.
Fix contributed by Ionel Cristian Maries.
- Pool: Fixed bug in maintain_pool.
3.3.0.11 - 2013-12-03
---------------------
- Fixed Unicode error when installing the distribution (Issue #89).
- Daemonic processes are now allowed to have children.
But note that it will not be possible to automatically
terminate them when the process exits.
See discussion at https://github.com/celery/celery/issues/1709
- Pool: Would not always be able to detect that a process exited.
3.3.0.10 - 2013-12-02
---------------------
- Windows: Fixed problem with missing ``WAITABANDONED_0``
Fix contributed by Matthias Wagner
- Windows: PipeConnection can now be inherited.
Fix contributed by Matthias Wagner
3.3.0.9 - 2013-12-02
--------------------
- Temporary workaround for Celery maxtasksperchild issue.
Fix contributed by Ionel Cristian Maries.
3.3.0.8 - 2013-11-21
--------------------
- Now also sets ``multiprocessing.current_process`` for compatibility
with logging's ``processName`` field.
3.3.0.7 - 2013-11-15
--------------------
- Fixed compatibility with PyPy 2.1 + 2.2.
- Fixed problem in pypy detection.
Fix contributed by Tin Tvrtkovic.
- Now uses ``ctypes.find_library`` instead of hardcoded path to find
the OS X CoreServices framework.
Fix contributed by Moritz Kassner.
3.3.0.6 - 2013-11-12
--------------------
- Now works without C extension again.
- New ``_billiard.read(fd, buffer, [len, ])`` function
implements os.read with buffer support (new buffer API)
- New pure-python implementation of ``Connection.send_offset``.
3.3.0.5 - 2013-11-11
--------------------
- All platforms except Windows/PyPy/Jython now require the C extension.
3.3.0.4 - 2013-11-11
--------------------
- Fixed problem with Python3 and setblocking.
3.3.0.3 - 2013-11-09
--------------------
- Now works on Windows again.
3.3.0.2 - 2013-11-08
--------------------
- ApplyResult.terminate() may be set to signify that the job
must not be executed. It can be used in combination with
Pool.terminate_job.
- Pipe/_SimpleQueue: Now supports rnonblock/wnonblock arguments
  to set the read or write end of the pipe to be nonblocking
  (see the example at the end of this section).
- Pool: Log message included exception info but exception happened
in another process so the resulting traceback was wrong.
- Pool: Worker process can now prepare results before they are sent
back to the main process (using ``Worker.prepare_result``).
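A sketch of the new nonblocking flags (argument names as in the
``billiard.Pipe`` entrypoint)::

    from billiard import Pipe

    # Make the read end nonblocking; the write end stays blocking.
    r, w = Pipe(duplex=False, rnonblock=True, wnonblock=False)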
3.3.0.1 - 2013-11-04
--------------------
- Pool: New ``correlation_id`` argument to ``apply_async`` can be
used to set a related id for the ``ApplyResult`` object returned:
>>> r = pool.apply_async(target, args, kwargs, correlation_id='foo')
>>> r.correlation_id
'foo'
- Pool: New callback `on_process_exit` is called when a pool
process exits, with signature ``(pid, exitcode)``.
Contributed by Daniel M. Taub.
- Pool: Improved the too many restarts detection.
3.3.0.0 - 2013-10-14
--------------------
- Dual code base now runs on Python 2.6+ and Python 3.
- No longer compatible with Python 2.5
- Includes many changes from multiprocessing in 3.4.
- Now uses ``time.monotonic`` when available, also including
fallback implementations for Linux and OS X.
- No longer cleans up after receiving SIGILL, SIGSEGV or SIGFPE
Contributed by Kevin Blackham
- ``Finalize`` and ``register_after_fork`` are now aliases to multiprocessing.
It's better to import these from multiprocessing directly now
so that there aren't multiple registries.
- New `billiard.queues._SimpleQueue` that does not use semaphores.
- Pool: Can now be extended to support using multiple IPC queues.
- Pool: Can now use async I/O to write to pool IPC queues.
- Pool: New ``Worker.on_loop_stop`` handler can be used to add actions
at pool worker process shutdown.
Note that, like all finalization handlers, there is no guarantee that
this will be executed.
Contributed by dmtaub.
2.7.3.30 - 2013-06-28
---------------------
- Fixed ImportError in billiard._ext
2.7.3.29 - 2013-06-28
---------------------
- Compilation: Fixed improper handling of HAVE_SEM_OPEN (Issue #55)
Fix contributed by Krzysztof Jagiello.
- Process now releases logging locks after fork.
This previously happened in Pool, but it was done too late
  as processes log when they bootstrap.
- Pool.terminate_job now ignores `No such process` errors.
- The billiard.Pool entrypoint did not support new arguments
  to billiard.pool.Pool.
- Connection inbound buffer size increased from 1 KB to 128 KB.
- C extension cleaned up by properly adding a namespace to symbols.
- _exit_function now works even if thread wakes up after gc collect.
2.7.3.28 - 2013-04-16
---------------------
- Pool: Fixed regression that disabled the deadlock
fix in 2.7.3.24
- Pool: RestartFreqExceeded could be raised prematurely.
- Process: Include pid in startup and process INFO logs.
2.7.3.27 - 2013-04-12
---------------------
- Manager now works again.
- Python 3 fixes for billiard.connection.
- Fixed invalid argument bug when running on Python 3.3
Fix contributed by Nathan Wan.
- Ignore OSError when setting up signal handlers.
2.7.3.26 - 2013-04-09
---------------------
- Pool: Child processes must ignore SIGINT.
2.7.3.25 - 2013-04-09
---------------------
- Pool: 2.7.3.24 broke support for subprocesses (Issue #48).
Signals that should be ignored were instead handled
by terminating.
2.7.3.24 - 2013-04-08
---------------------
- Pool: Make sure finally blocks are called when process exits
due to a signal.
This fixes a deadlock problem when the process is killed
  while it has acquired the shared semaphore. However, this solution
  does not protect against the process being forcibly killed; a more
  elaborate solution is required for that, and will hopefully arrive in a
  later version.
- Pool: Can now use GDB to debug pool child processes.
- Fixes Python 3 compatibility problems.
Contributed by Albertas Agejevas.
2.7.3.23 - 2013-03-22
---------------------
- Windows: Now catches SystemExit from setuptools while trying to build
the C extension (Issue #41).
2.7.3.22 - 2013-03-08
---------------------
- Pool: apply_async now supports a ``callbacks_propagate`` keyword
argument that can be a tuple of exceptions to propagate in callbacks.
(callback, errback, accept_callback, timeout_callback).
- Errors are no longer logged for OK and recycle exit codes.
  This previously caused a process recycled by maxtasksperchild
  to log an error.
- Fixed Python 2.5 compatibility problem (Issue #33).
- FreeBSD: Compilation now disables semaphores if Python was built
  without semaphore support (Issue #40).
Contributed by William Grzybowski
2.7.3.21 - 2013-02-11
---------------------
- Fixed typo EX_REUSE -> EX_RECYCLE
- Code now conforms to new pep8.py rules.
2.7.3.20 - 2013-02-08
---------------------
- Pool: Disable restart limit if maxR is not set.
- Pool: Now uses os.kill instead of signal.signal.
Contributed by Lukasz Langa
- Fixed name error in process.py
- Pool: ApplyResult.get now properly raises exceptions.
Fix contributed by xentac.
2.7.3.19 - 2012-11-30
---------------------
- Fixes problem at shutdown when gc has collected symbols.
- Pool now always uses _kill for Py2.5 compatibility on Windows (Issue #32).
- Fixes Python 3 compatibility issues
2.7.3.18 - 2012-11-05
---------------------
- [Pool] Fix for check_timeouts if not set.
Fix contributed by Dmitry Sukhov
- Fixed pickle problem with Traceback.
Code.frame.__loader__ is now ignored as it may be set to
an unpickleable object.
- The Django old-layout warning was always showing.
2.7.3.17 - 2012-09-26
---------------------
- Fixes typo
2.7.3.16 - 2012-09-26
---------------------
- Windows: Fixes for SemLock._rebuild (Issue #24).
- Pool: Job terminated with terminate_job now raises
billiard.exceptions.Terminated.
2.7.3.15 - 2012-09-21
---------------------
- Windows: Fixes unpickling of SemLock when using fallback.
- Windows: Fixes installation when no C compiler.
2.7.3.14 - 2012-09-20
---------------------
- Installation now works again for Python 3.
2.7.3.13 - 2012-09-14
---------------------
- Merged with Python trunk (many authors, many fixes: see Python changelog in
trunk).
- Using execv now also works with older Django projects using setup_environ
(Issue #10).
- Billiard now installs with a warning that the C extension could not be built
if a compiler is not installed or the build fails in some other way.
It really is recommended to have the C extension installed when running
with force execv, but this change also makes it easier to install.
- Pool: Hard timeouts now send KILL shortly after TERM so that C extensions
cannot block the signal.
Python signal handlers are called in the interpreter, so they cannot
be called while a C extension is blocking the interpreter from running.
- Now uses a timeout value for Thread.join that doesn't exceed the maximum
on some platforms.
- Fixed bug in the SemLock fallback used when C extensions not installed.
Fix contributed by Mher Movsisyan.
- Pool: Now sets a Process.index attribute for every process in the pool.
This number will always be between 0 and concurrency-1, and
can be used to e.g. create a logfile for each process in the pool
without creating a new logfile whenever a process is replaced.
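For example, the index can be used from a worker initializer to open a
per-process logfile (a sketch: ``setup_logging`` is a hypothetical
initializer, and this assumes the attribute is visible via
``current_process()`` inside the worker)::

    from billiard import Pool, current_process

    def setup_logging():
        # index is stable across process replacement, so the logfile
        # name stays the same when a worker is recycled.
        index = current_process().index
        open('worker-%d.log' % index, 'a')

    pool = Pool(processes=4, initializer=setup_logging)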
2.7.3.12 - 2012-08-05
---------------------
- Fixed Python 2.5 compatibility issue.
- New Pool.terminate_job(pid) to terminate a job without raising WorkerLostError
2.7.3.11 - 2012-08-01
---------------------
- Adds support for FreeBSD 7+
Fix contributed by koobs.
- Pool: The new ``allow_restart`` argument must now be passed to enable
  the pool process sentinel that is needed to restart the pool.
It's disabled by default, which reduces the number of file
descriptors/semaphores required to run the pool.
- Pool: Now emits a warning if a worker process exited with an error
  exit code, except for code 155, which is returned when the worker
  process was recycled (maxtasksperchild).
- Python 3 compatibility fixes.
- Python 2.5 compatibility fixes.
2.7.3.10 - 2012-06-26
---------------------
- The ``TimeLimitExceeded`` exception string representation
  previously included only the number of seconds; it now gives a more
  human-friendly description.
- Fixed typo in ``LaxBoundedSemaphore.shrink``.
- Pool: ``ResultHandler.handle_event`` no longer requires
any arguments.
- setup.py bdist now works
2.7.3.9 - 2012-06-03
--------------------
- The ``MP_MAIN_FILE`` environment variable is now set to
the path of the ``__main__`` module when execv is enabled.
- Pool: Errors occurring in the TaskHandler are now reported.
2.7.3.8 - 2012-06-01
--------------------
- Can now be installed on Py 3.2
- Issue #12091: simplify ApplyResult and MapResult with threading.Event
Patch by Charles-Francois Natali
- Pool: Support running without TimeoutHandler thread.
- The with_*_thread arguments have also been replaced with
a single `threads=True` argument.
- Two new pool callbacks:
- ``on_timeout_set(job, soft, hard)``
Applied when a task is executed with a timeout.
- ``on_timeout_cancel(job)``
      Applied when a timeout is cancelled (the job completed).
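Both callbacks are keyword arguments to the pool (a sketch using the
``billiard.Pool`` entrypoint)::

    from billiard import Pool

    def on_timeout_set(job, soft, hard):
        print('timeouts set for %r: soft=%r hard=%r' % (job, soft, hard))

    def on_timeout_cancel(job):
        print('timeout cancelled for %r' % (job,))

    pool = Pool(processes=2,
                on_timeout_set=on_timeout_set,
                on_timeout_cancel=on_timeout_cancel)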
2.7.3.7 - 2012-05-21
--------------------
- Fixes Python 2.5 support.
2.7.3.6 - 2012-05-21
--------------------
- Pool: Can now be used in an event loop, without starting the supporting
threads (TimeoutHandler still not supported)
To facilitate this the pool has gained the following keyword arguments:
- ``with_task_thread``
- ``with_result_thread``
- ``with_supervisor_thread``
- ``on_process_up``
Callback called with Process instance as argument
whenever a new worker process is added.
Used to add new process fds to the eventloop::
def on_process_up(proc):
hub.add_reader(proc.sentinel, pool.maintain_pool)
- ``on_process_down``
Callback called with Process instance as argument
whenever a new worker process is found dead.
Used to remove process fds from the eventloop::
def on_process_down(proc):
hub.remove(proc.sentinel)
- ``semaphore``
      Sets the semaphore used to protect against adding new items to the
      pool when no processes are available. The default is a threaded
      semaphore, so this can be used to change to an async semaphore.
  And the following attributes:
- ``readers``
A map of ``fd`` -> ``callback``, to be registered in an eventloop.
Currently this is only the result outqueue with a callback
that processes all currently incoming results.
  And the following methods:
- ``did_start_ok``
To be called after starting the pool, and after setting up the
eventloop with the pool fds, to ensure that the worker processes
      didn't immediately exit because of an error (internal/memory).
- ``maintain_pool``
Public version of ``_maintain_pool`` that handles max restarts.
- Pool: The too-frequent-restart protection now only counts processes
  that had a non-successful exit code.
  This takes the maxtasksperchild option into account, allowing
  processes to exit cleanly on their own.
- Pool: New options ``max_restarts`` + ``max_restart_freq``
  This means that the supervisor can't restart processes
  faster than ``max_restarts`` times per ``max_restart_freq`` seconds
  (like the Erlang supervisor maxR & maxT settings).
  The pool is closed and joined if the max restart
  frequency is exceeded, where previously it would keep restarting
  at an unlimited rate, possibly crashing the system.
  The current default is to stop if it exceeds
  100 * process_count restarts in 1 second. This may change later.
  Only processes with an unsuccessful exit code are counted,
  to take into account the ``maxtasksperchild`` setting
  and code that voluntarily exits. (A usage sketch follows at the end
  of this section.)
- Pool: The ``WorkerLostError`` message now includes the exit-code of the
process that disappeared.
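A usage sketch of the restart limit (the ``billiard.Pool`` entrypoint
spells the options ``max_restarts`` and ``max_restart_freq``)::

    from billiard import Pool

    # Allow at most 400 worker restarts per second; the pool is
    # closed and joined if this frequency is exceeded.
    pool = Pool(processes=4, max_restarts=400, max_restart_freq=1)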
2.7.3.5 - 2012-05-09
--------------------
- Now always cleans up after ``sys.exc_info()`` to avoid
cyclic references.
- ExceptionInfo without arguments now defaults to ``sys.exc_info``.
- Forking can now be disabled using the
  ``MULTIPROCESSING_FORKING_DISABLE`` environment variable.
  The variable is also set by billiard itself so that the behavior is
  inherited after execv (see the sketch at the end of this section).
- The semaphore cleanup process started when execv is used
now sets a useful process name if the ``setproctitle``
module is installed.
- Sets the ``FORKED_BY_MULTIPROCESSING``
environment variable if forking is disabled.
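A sketch of disabling forking through the environment (the variable is
checked when billiard is imported)::

    import os
    os.environ['MULTIPROCESSING_FORKING_DISABLE'] = '1'

    import billiard  # forking is now disabled, and the setting
                     # is inherited after execv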
2.7.3.4 - 2012-04-27
--------------------
- Added `billiard.ensure_multiprocessing()`
Raises NotImplementedError if the platform does not support
multiprocessing (e.g. Jython).
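  For example (a no-op on platforms that do support multiprocessing)::

      >>> from billiard import ensure_multiprocessing
      >>> ensure_multiprocessing()  # raises NotImplementedError on Jython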
2.7.3.3 - 2012-04-23
--------------------
- PyPy now falls back to using its internal _multiprocessing module,
so everything works except for forking_enable(False) (which
silently degrades).
- Fixed Python 2.5 compat. issues.
- Uses more with statements
- Merged some of the changes from the Python 3 branch.
2.7.3.2 - 2012-04-20
--------------------
- Now installs on PyPy/Jython (but does not work).
2.7.3.1 - 2012-04-20
--------------------
- Python 2.5 support added.
2.7.3.0 - 2012-04-20
--------------------
- Updated from Python 2.7.3
- Python 2.4 support removed, now only supports 2.5, 2.6 and 2.7.
(may consider py3k support at some point).
- Pool improvements from Celery.
- no-execv patch added (http://bugs.python.org/issue8713)
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python
Classifier: Programming Language :: C
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.5
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: Jython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: License :: OSI Approved :: BSD License
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Distributed Computing

38
README.rst Normal file
View File

@ -0,0 +1,38 @@
========
billiard
========
:version: 3.3.0.18
About
-----
`billiard` is a fork of the Python 2.7 `multiprocessing <http://docs.python.org/library/multiprocessing.html>`_
package. The multiprocessing package itself is a renamed and updated version of
R Oudkerk's `pyprocessing <http://pypi.python.org/pypi/processing/>`_ package.
This standalone variant is intended to be compatible with Python 2.4 and 2.5,
and will draw its fixes/improvements from python-trunk.
- This package would not be possible if not for the contributions of not only
the current maintainers but all of the contributors to the original pyprocessing
package listed `here <http://pyprocessing.berlios.de/doc/THANKS.html>`_
- Also it is a fork of the multiprocessing backport package by Christian Heimes.
- It includes the no-execv patch contributed by R. Oudkerk.
- And the Pool improvements previously located in `Celery`_.
.. _`Celery`: http://celeryproject.org
Bug reporting
-------------
Please report bugs related to multiprocessing at the
`Python bug tracker <http://bugs.python.org/>`_. Issues related to billiard
should be reported at http://github.com/celery/billiard/issues.
.. image:: https://d2weczhvl823v0.cloudfront.net/celery/billiard/trend.png
:alt: Bitdeli badge
:target: https://bitdeli.com/free

746
billiard.egg-info/PKG-INFO Normal file
View File

@ -0,0 +1,746 @@
Metadata-Version: 1.1
Name: billiard
Version: 3.3.0.18
Summary: Python multiprocessing fork with improvements and bugfixes
Home-page: http://github.com/celery/billiard
Author: Ask Solem
Author-email: ask@celeryproject.org
License: BSD
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python
Classifier: Programming Language :: C
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.5
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: Jython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: License :: OSI Approved :: BSD License
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Distributed Computing

70
billiard.egg-info/SOURCES.txt Normal file
View File

@ -0,0 +1,70 @@
CHANGES.txt
INSTALL.txt
LICENSE.txt
MANIFEST.in
Makefile
README.rst
setup.cfg
setup.py
Doc/conf.py
Doc/glossary.rst
Doc/index.rst
Doc/includes/__init__.py
Doc/includes/mp_benchmarks.py
Doc/includes/mp_newtype.py
Doc/includes/mp_pool.py
Doc/includes/mp_synchronize.py
Doc/includes/mp_webserver.py
Doc/includes/mp_workers.py
Doc/library/multiprocessing.rst
Modules/_billiard/connection.h
Modules/_billiard/multiprocessing.c
Modules/_billiard/multiprocessing.h
Modules/_billiard/pipe_connection.c
Modules/_billiard/semaphore.c
Modules/_billiard/socket_connection.c
Modules/_billiard/win32_functions.c
billiard/__init__.py
billiard/_ext.py
billiard/_win.py
billiard/common.py
billiard/compat.py
billiard/connection.py
billiard/einfo.py
billiard/exceptions.py
billiard/five.py
billiard/forking.py
billiard/heap.py
billiard/managers.py
billiard/pool.py
billiard/process.py
billiard/queues.py
billiard/reduction.py
billiard/sharedctypes.py
billiard/synchronize.py
billiard/util.py
billiard.egg-info/PKG-INFO
billiard.egg-info/SOURCES.txt
billiard.egg-info/dependency_links.txt
billiard.egg-info/not-zip-safe
billiard.egg-info/top_level.txt
billiard/dummy/__init__.py
billiard/dummy/connection.py
billiard/py2/__init__.py
billiard/py2/connection.py
billiard/py2/reduction.py
billiard/py3/__init__.py
billiard/py3/connection.py
billiard/py3/reduction.py
billiard/tests/__init__.py
billiard/tests/compat.py
billiard/tests/test_common.py
billiard/tests/test_package.py
billiard/tests/utils.py
funtests/__init__.py
funtests/setup.py
funtests/tests/__init__.py
funtests/tests/test_multiprocessing.py
requirements/test-ci.txt
requirements/test.txt
requirements/test3.txt

1
billiard.egg-info/dependency_links.txt Normal file
View File

@ -0,0 +1 @@

1
billiard.egg-info/not-zip-safe Normal file
View File

@ -0,0 +1 @@

3
billiard.egg-info/top_level.txt Normal file
View File

@ -0,0 +1,3 @@
funtests
billiard
_billiard

327
billiard/__init__.py Normal file
View File

@ -0,0 +1,327 @@
"""Python multiprocessing fork with improvements and bugfixes"""
#
# Package analogous to 'threading.py' but using processes
#
# multiprocessing/__init__.py
#
# This package is intended to duplicate the functionality (and much of
# the API) of threading.py but uses processes instead of threads. A
# subpackage 'multiprocessing.dummy' has the same API but is a simple
# wrapper for 'threading'.
#
# Try calling `multiprocessing.doc.main()` to read the html
# documentation in a webbrowser.
#
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
VERSION = (3, 3, 0, 18)
__version__ = '.'.join(map(str, VERSION[0:4])) + "".join(VERSION[4:])
__author__ = 'R Oudkerk / Python Software Foundation'
__author_email__ = 'python-dev@python.org'
__maintainer__ = 'Ask Solem'
__contact__ = "ask@celeryproject.org"
__homepage__ = "http://github.com/celery/billiard"
__docformat__ = "restructuredtext"
# -eof meta-
__all__ = [
'Process', 'current_process', 'active_children', 'freeze_support',
'Manager', 'Pipe', 'cpu_count', 'log_to_stderr', 'get_logger',
'allow_connection_pickling', 'BufferTooShort', 'TimeoutError',
'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Condition',
'Event', 'Queue', 'JoinableQueue', 'Pool', 'Value', 'Array',
'RawValue', 'RawArray', 'SUBDEBUG', 'SUBWARNING', 'set_executable',
'forking_enable', 'forking_is_enabled'
]
#
# Imports
#
import os
import sys
import warnings
from .exceptions import ( # noqa
ProcessError,
BufferTooShort,
TimeoutError,
AuthenticationError,
TimeLimitExceeded,
SoftTimeLimitExceeded,
WorkerLostError,
)
from .process import Process, current_process, active_children
from .util import SUBDEBUG, SUBWARNING
def ensure_multiprocessing():
from ._ext import ensure_multiprocessing
return ensure_multiprocessing()
W_NO_EXECV = """\
force_execv is not supported as the billiard C extension \
is not installed\
"""
#
# Definitions not depending on native semaphores
#
def Manager():
'''
Returns a manager associated with a running server process
The managers methods such as `Lock()`, `Condition()` and `Queue()`
can be used to create shared objects.
'''
from .managers import SyncManager
m = SyncManager()
m.start()
return m
def Pipe(duplex=True, rnonblock=False, wnonblock=False):
'''
Returns two connection object connected by a pipe
'''
from billiard.connection import Pipe
return Pipe(duplex, rnonblock, wnonblock)
def cpu_count():
'''
Returns the number of CPUs in the system
'''
if sys.platform == 'win32':
try:
num = int(os.environ['NUMBER_OF_PROCESSORS'])
except (ValueError, KeyError):
num = 0
elif 'bsd' in sys.platform or sys.platform == 'darwin':
comm = '/sbin/sysctl -n hw.ncpu'
if sys.platform == 'darwin':
comm = '/usr' + comm
try:
with os.popen(comm) as p:
num = int(p.read())
except ValueError:
num = 0
else:
try:
num = os.sysconf('SC_NPROCESSORS_ONLN')
except (ValueError, OSError, AttributeError):
num = 0
if num >= 1:
return num
else:
raise NotImplementedError('cannot determine number of cpus')
def freeze_support():
'''
Check whether this is a fake forked process in a frozen executable.
If so then run code specified by commandline and exit.
'''
if sys.platform == 'win32' and getattr(sys, 'frozen', False):
from .forking import freeze_support
freeze_support()
def get_logger():
'''
Return package logger -- if it does not already exist then it is created
'''
from .util import get_logger
return get_logger()
def log_to_stderr(level=None):
'''
Turn on logging and add a handler which prints to stderr
'''
from .util import log_to_stderr
return log_to_stderr(level)
def allow_connection_pickling():
'''
Install support for sending connections and sockets between processes
'''
from . import reduction # noqa
#
# Definitions depending on native semaphores
#
def Lock():
'''
Returns a non-recursive lock object
'''
from .synchronize import Lock
return Lock()
def RLock():
'''
Returns a recursive lock object
'''
from .synchronize import RLock
return RLock()
def Condition(lock=None):
'''
Returns a condition object
'''
from .synchronize import Condition
return Condition(lock)
def Semaphore(value=1):
'''
Returns a semaphore object
'''
from .synchronize import Semaphore
return Semaphore(value)
def BoundedSemaphore(value=1):
'''
Returns a bounded semaphore object
'''
from .synchronize import BoundedSemaphore
return BoundedSemaphore(value)
def Event():
'''
Returns an event object
'''
from .synchronize import Event
return Event()
def Queue(maxsize=0):
'''
Returns a queue object
'''
from .queues import Queue
return Queue(maxsize)
def JoinableQueue(maxsize=0):
'''
Returns a queue object
'''
from .queues import JoinableQueue
return JoinableQueue(maxsize)
def Pool(processes=None, initializer=None, initargs=(), maxtasksperchild=None,
timeout=None, soft_timeout=None, lost_worker_timeout=None,
max_restarts=None, max_restart_freq=1, on_process_up=None,
on_process_down=None, on_timeout_set=None, on_timeout_cancel=None,
threads=True, semaphore=None, putlocks=False, allow_restart=False):
'''
Returns a process pool object
'''
from .pool import Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
timeout, soft_timeout, lost_worker_timeout,
max_restarts, max_restart_freq, on_process_up,
on_process_down, on_timeout_set, on_timeout_cancel,
threads, semaphore, putlocks, allow_restart)
def RawValue(typecode_or_type, *args):
'''
Returns a shared object
'''
from .sharedctypes import RawValue
return RawValue(typecode_or_type, *args)
def RawArray(typecode_or_type, size_or_initializer):
'''
Returns a shared array
'''
from .sharedctypes import RawArray
return RawArray(typecode_or_type, size_or_initializer)
def Value(typecode_or_type, *args, **kwds):
'''
Returns a synchronized shared object
'''
from .sharedctypes import Value
return Value(typecode_or_type, *args, **kwds)
def Array(typecode_or_type, size_or_initializer, **kwds):
'''
Returns a synchronized shared array
'''
from .sharedctypes import Array
return Array(typecode_or_type, size_or_initializer, **kwds)
#
#
#
def set_executable(executable):
'''
Sets the path to a python.exe or pythonw.exe binary used to run
child processes on Windows instead of sys.executable.
Useful for people embedding Python.
'''
from .forking import set_executable
set_executable(executable)
def forking_is_enabled():
'''
Returns a boolean value indicating whether billiard is
currently set to create child processes by forking the current
python process rather than by starting a new instances of python.
On Windows this always returns `False`. On Unix it returns `True` by
default.
'''
from . import forking
return forking._forking_is_enabled
def forking_enable(value):
'''
Enable/disable creation of child process by forking the current process.
`value` should be a boolean value. If `value` is true then
forking is enabled. If `value` is false then forking is disabled.
On systems with `os.fork()` forking is enabled by default, and on
other systems it is always disabled.
'''
if not value:
from ._ext import supports_exec
if supports_exec:
from . import forking
if value and not hasattr(os, 'fork'):
raise ValueError('os.fork() not found')
forking._forking_is_enabled = bool(value)
if not value:
os.environ["MULTIPROCESSING_FORKING_DISABLE"] = "1"
else:
warnings.warn(RuntimeWarning(W_NO_EXECV))
if os.environ.get("MULTIPROCESSING_FORKING_DISABLE"):
forking_enable(False)

40
billiard/_ext.py Normal file
View File

@ -0,0 +1,40 @@
from __future__ import absolute_import
import sys
supports_exec = True
from .compat import _winapi as win32 # noqa
if sys.platform.startswith("java"):
_billiard = None
else:
try:
import _billiard # noqa
except ImportError:
import _multiprocessing as _billiard # noqa
supports_exec = False
try:
Connection = _billiard.Connection
except AttributeError: # Py3
from billiard.connection import Connection # noqa
PipeConnection = getattr(_billiard, "PipeConnection", None)
def ensure_multiprocessing():
if _billiard is None:
raise NotImplementedError("multiprocessing not supported")
def ensure_SemLock():
try:
from _billiard import SemLock # noqa
except ImportError:
try:
from _multiprocessing import SemLock # noqa
except ImportError:
raise ImportError("""\
This platform lacks a functioning sem_open implementation, therefore,
the required synchronization primitives needed will not function,
see issue 3770.""")

116
billiard/_win.py Normal file
View File

@ -0,0 +1,116 @@
# -*- coding: utf-8 -*-
"""
billiard._win
~~~~~~~~~~~~~
Windows utilities to terminate process groups.
"""
from __future__ import absolute_import
import os
# psutil is painfully slow in win32. So to avoid adding big
# dependencies like pywin32 a ctypes based solution is preferred
# Code based on the winappdbg project http://winappdbg.sourceforge.net/
# (BSD License)
from ctypes import (
byref, sizeof, windll,
Structure, WinError, POINTER,
c_size_t, c_char, c_void_p,
)
from ctypes.wintypes import DWORD, LONG
ERROR_NO_MORE_FILES = 18
INVALID_HANDLE_VALUE = c_void_p(-1).value
class PROCESSENTRY32(Structure):
_fields_ = [
('dwSize', DWORD),
('cntUsage', DWORD),
('th32ProcessID', DWORD),
('th32DefaultHeapID', c_size_t),
('th32ModuleID', DWORD),
('cntThreads', DWORD),
('th32ParentProcessID', DWORD),
('pcPriClassBase', LONG),
('dwFlags', DWORD),
('szExeFile', c_char * 260),
]
LPPROCESSENTRY32 = POINTER(PROCESSENTRY32)
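# dwFlags=2 is TH32CS_SNAPPROCESS: include all processes in the snapshot.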
def CreateToolhelp32Snapshot(dwFlags=2, th32ProcessID=0):
hSnapshot = windll.kernel32.CreateToolhelp32Snapshot(dwFlags,
th32ProcessID)
if hSnapshot == INVALID_HANDLE_VALUE:
raise WinError()
return hSnapshot
def Process32First(hSnapshot, pe=None):
return _Process32n(windll.kernel32.Process32First, hSnapshot, pe)
def Process32Next(hSnapshot, pe=None):
return _Process32n(windll.kernel32.Process32Next, hSnapshot, pe)
def _Process32n(fun, hSnapshot, pe=None):
if pe is None:
pe = PROCESSENTRY32()
pe.dwSize = sizeof(PROCESSENTRY32)
success = fun(hSnapshot, byref(pe))
if not success:
if windll.kernel32.GetLastError() == ERROR_NO_MORE_FILES:
return
raise WinError()
return pe
def get_all_processes_pids():
"""Return a dictionary with all processes pids as keys and their
parents as value. Ignore processes with no parents.
"""
h = CreateToolhelp32Snapshot()
parents = {}
pe = Process32First(h)
while pe:
if pe.th32ParentProcessID:
parents[pe.th32ProcessID] = pe.th32ParentProcessID
pe = Process32Next(h, pe)
return parents
def get_processtree_pids(pid, include_parent=True):
"""Return a list with all the pids of a process tree"""
parents = get_all_processes_pids()
all_pids = list(parents.keys())
pids = set([pid])
while 1:
pids_new = pids.copy()
for _pid in all_pids:
if parents[_pid] in pids:
pids_new.add(_pid)
if pids_new == pids:
break
pids = pids_new.copy()
if not include_parent:
pids.remove(pid)
return list(pids)
def kill_processtree(pid, signum):
"""Kill a process and all its descendants"""
family_pids = get_processtree_pids(pid)
for _pid in family_pids:
os.kill(_pid, signum)
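# Usage sketch (`pid` is a hypothetical process id; any signal accepted
# by os.kill on this platform can be used):
#
#   import signal
#   kill_processtree(pid, signal.SIGTERM)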

134
billiard/common.py Normal file
View File

@ -0,0 +1,134 @@
# -*- coding: utf-8 -*-
"""
This module contains utilities added by billiard, to keep
"non-core" functionality out of ``.util``."""
from __future__ import absolute_import
import os
import signal
import sys
import pickle as pypickle
try:
import cPickle as cpickle
except ImportError: # pragma: no cover
cpickle = None # noqa
from .exceptions import RestartFreqExceeded
from .five import monotonic
if sys.version_info < (2, 6): # pragma: no cover
# cPickle does not use absolute_imports
pickle = pypickle
pickle_load = pypickle.load
pickle_loads = pypickle.loads
else:
pickle = cpickle or pypickle
pickle_load = pickle.load
pickle_loads = pickle.loads
# cPickle.loads does not support buffer() objects,
# but we can just create a StringIO and use load.
if sys.version_info[0] == 3:
from io import BytesIO
else:
try:
from cStringIO import StringIO as BytesIO # noqa
except ImportError:
from StringIO import StringIO as BytesIO # noqa
EX_SOFTWARE = 70
TERMSIGS_DEFAULT = (
'SIGHUP',
'SIGQUIT',
'SIGTERM',
'SIGUSR1',
'SIGUSR2'
)
TERMSIGS_FULL = (
'SIGHUP',
'SIGQUIT',
'SIGTRAP',
'SIGABRT',
'SIGEMT',
'SIGSYS',
'SIGPIPE',
'SIGALRM',
'SIGTERM',
'SIGXCPU',
'SIGXFSZ',
'SIGVTALRM',
'SIGPROF',
'SIGUSR1',
'SIGUSR2',
)
#: set by signal handlers just before calling exit.
#: if this is true after the sighandler returns it means that something
#: went wrong while terminating the process, and :func:`os._exit`
#: must be called ASAP.
_should_have_exited = [False]
def pickle_loads(s, load=pickle_load):
# used to support buffer objects
return load(BytesIO(s))
def maybe_setsignal(signum, handler):
try:
signal.signal(signum, handler)
except (OSError, AttributeError, ValueError, RuntimeError):
pass
def _shutdown_cleanup(signum, frame):
# we will exit here so if the signal is received a second time
# we can be sure that something is very wrong and we may be in
# a crashing loop.
if _should_have_exited[0]:
os._exit(EX_SOFTWARE)
maybe_setsignal(signum, signal.SIG_DFL)
_should_have_exited[0] = True
sys.exit(-(256 - signum))
def reset_signals(handler=_shutdown_cleanup, full=False):
for sig in TERMSIGS_FULL if full else TERMSIGS_DEFAULT:
try:
signum = getattr(signal, sig)
except AttributeError:
pass
else:
current = signal.getsignal(signum)
if current is not None and current != signal.SIG_IGN:
maybe_setsignal(signum, handler)
class restart_state(object):
RestartFreqExceeded = RestartFreqExceeded
def __init__(self, maxR, maxT):
self.maxR, self.maxT = maxR, maxT
self.R, self.T = 0, None
def step(self, now=None):
now = monotonic() if now is None else now
R = self.R
if self.T and now - self.T >= self.maxT:
# maxT passed, reset counter and time passed.
self.T, self.R = now, 0
elif self.maxR and self.R >= self.maxR:
# verify that R has a value as the result handler
# resets this when a job is accepted. If a job is accepted
# the startup probably went fine (startup restart burst
# protection)
if self.R: # pragma: no cover
self.R = 0 # reset in case someone catches the error
raise self.RestartFreqExceeded("%r in %rs" % (R, self.maxT))
# first run sets T
if self.T is None:
self.T = now
self.R += 1
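# Usage sketch: allow at most 100 restarts within a 1 second window:
#
#   state = restart_state(maxR=100, maxT=1.0)
#   state.step()  # raises RestartFreqExceeded once the limit is hit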

107
billiard/compat.py Normal file
View File

@ -0,0 +1,107 @@
from __future__ import absolute_import
import errno
import os
import sys
from .five import range
if sys.platform == 'win32':
try:
import _winapi # noqa
except ImportError: # pragma: no cover
try:
from _billiard import win32 as _winapi # noqa
except (ImportError, AttributeError):
from _multiprocessing import win32 as _winapi # noqa
else:
_winapi = None # noqa
if sys.version_info > (2, 7, 5):
buf_t, is_new_buffer = memoryview, True # noqa
else:
buf_t, is_new_buffer = buffer, False # noqa
if hasattr(os, 'write'):
__write__ = os.write
if is_new_buffer:
def send_offset(fd, buf, offset):
return __write__(fd, buf[offset:])
else: # Py<2.7.6
def send_offset(fd, buf, offset): # noqa
return __write__(fd, buf_t(buf, offset))
else: # non-posix platform
def send_offset(fd, buf, offset): # noqa
raise NotImplementedError('send_offset')
if sys.version_info[0] == 3:
bytes = bytes
else:
_bytes = bytes
# the 'bytes' alias in Python2 does not support an encoding argument.
class bytes(_bytes): # noqa
def __new__(cls, *args):
if len(args) > 1:
return _bytes(args[0]).encode(*args[1:])
return _bytes(*args)
try:
closerange = os.closerange
except AttributeError:
def closerange(fd_low, fd_high): # noqa
for fd in reversed(range(fd_low, fd_high)):
try:
os.close(fd)
except OSError as exc:
if exc.errno != errno.EBADF:
raise
def get_errno(exc):
""":exc:`socket.error` and :exc:`IOError` first got
the ``.errno`` attribute in Py2.7"""
try:
return exc.errno
except AttributeError:
try:
# e.args = (errno, reason)
if isinstance(exc.args, tuple) and len(exc.args) == 2:
return exc.args[0]
except AttributeError:
pass
return 0
if sys.platform == 'win32':
def setblocking(handle, blocking):
raise NotImplementedError('setblocking not implemented on win32')
def isblocking(handle):
raise NotImplementedError('isblocking not implemented on win32')
else:
from os import O_NONBLOCK
from fcntl import fcntl, F_GETFL, F_SETFL
def isblocking(handle): # noqa
return not (fcntl(handle, F_GETFL) & O_NONBLOCK)
def setblocking(handle, blocking): # noqa
flags = fcntl(handle, F_GETFL, 0)
fcntl(
handle, F_SETFL,
flags & (~O_NONBLOCK) if blocking else flags | O_NONBLOCK,
)

27
billiard/connection.py Normal file
View File

@ -0,0 +1,27 @@
from __future__ import absolute_import
import sys
is_pypy = hasattr(sys, 'pypy_version_info')
if sys.version_info[0] == 3:
from .py3 import connection
else:
from .py2 import connection # noqa
if is_pypy:
import _multiprocessing
from .compat import setblocking, send_offset
class Connection(_multiprocessing.Connection):
def send_offset(self, buf, offset):
return send_offset(self.fileno(), buf, offset)
def setblocking(self, blocking):
setblocking(self.fileno(), blocking)
_multiprocessing.Connection = Connection
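# Replace this module in sys.modules with the version-specific
# implementation chosen above, so importing billiard.connection
# transparently yields the py2/py3 module.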
sys.modules[__name__] = connection

165
billiard/dummy/__init__.py Normal file
View File

@ -0,0 +1,165 @@
#
# Support for the API of the multiprocessing package using threads
#
# multiprocessing/dummy/__init__.py
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. Neither the name of author nor the names of any contributors may be
# used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
#
from __future__ import absolute_import
__all__ = [
'Process', 'current_process', 'active_children', 'freeze_support',
'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Condition',
'Event', 'Queue', 'Manager', 'Pipe', 'Pool', 'JoinableQueue'
]
#
# Imports
#
import threading
import sys
import weakref
import array
from threading import Lock, RLock, Semaphore, BoundedSemaphore
from threading import Event
from billiard.five import Queue
from billiard.connection import Pipe
class DummyProcess(threading.Thread):
def __init__(self, group=None, target=None, name=None, args=(), kwargs={}):
threading.Thread.__init__(self, group, target, name, args, kwargs)
self._pid = None
self._children = weakref.WeakKeyDictionary()
self._start_called = False
self._parent = current_process()
def start(self):
assert self._parent is current_process()
self._start_called = True
self._parent._children[self] = None
threading.Thread.start(self)
@property
def exitcode(self):
if self._start_called and not self.is_alive():
return 0
else:
return None
try:
_Condition = threading._Condition
except AttributeError: # Py3
_Condition = threading.Condition # noqa
class Condition(_Condition):
if sys.version_info[0] == 3:
notify_all = _Condition.notifyAll
else:
notify_all = _Condition.notifyAll.__func__
Process = DummyProcess
current_process = threading.currentThread
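# Give the main thread a _children mapping so that active_children()
# works before any DummyProcess has been started.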
current_process()._children = weakref.WeakKeyDictionary()
def active_children():
children = current_process()._children
for p in list(children):
if not p.is_alive():
children.pop(p, None)
return list(children)
def freeze_support():
pass
class Namespace(object):
def __init__(self, **kwds):
self.__dict__.update(kwds)
def __repr__(self):
items = list(self.__dict__.items())
temp = []
for name, value in items:
if not name.startswith('_'):
temp.append('%s=%r' % (name, value))
temp.sort()
return 'Namespace(%s)' % str.join(', ', temp)
dict = dict
list = list
def Array(typecode, sequence, lock=True):
return array.array(typecode, sequence)
class Value(object):
def __init__(self, typecode, value, lock=True):
self._typecode = typecode
self._value = value
def _get(self):
return self._value
def _set(self, value):
self._value = value
value = property(_get, _set)
def __repr__(self):
return '<%r(%r, %r)>' % (type(self).__name__,
self._typecode, self._value)
def Manager():
return sys.modules[__name__]
def shutdown():
pass
def Pool(processes=None, initializer=None, initargs=()):
from billiard.pool import ThreadPool
return ThreadPool(processes, initializer, initargs)
JoinableQueue = Queue
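
A quick illustration of the thread-backed API defined above (an illustrative sketch; it assumes the billiard package is importable):

    from billiard.dummy import Process, Queue

    def produce(q):
        q.put('hello from a DummyProcess (really a thread)')

    q = Queue()
    p = Process(target=produce, args=(q,))
    p.start()
    p.join()
    print(q.get())       # -> 'hello from a DummyProcess (really a thread)'
    print(p.exitcode)    # -> 0 once the thread has finished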

94
billiard/dummy/connection.py Normal file

@@ -0,0 +1,94 @@
#
# Analogue of `multiprocessing.connection` which uses queues instead of sockets
#
# multiprocessing/dummy/connection.py
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. Neither the name of author nor the names of any contributors may be
# used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
#
from __future__ import absolute_import
__all__ = ['Client', 'Listener', 'Pipe']
from billiard.five import Queue
families = [None]
class Listener(object):
def __init__(self, address=None, family=None, backlog=1):
self._backlog_queue = Queue(backlog)
def accept(self):
return Connection(*self._backlog_queue.get())
def close(self):
self._backlog_queue = None
address = property(lambda self: self._backlog_queue)
def __enter__(self):
return self
def __exit__(self, *exc_info):
self.close()
def Client(address):
_in, _out = Queue(), Queue()
address.put((_out, _in))
return Connection(_in, _out)
def Pipe(duplex=True):
a, b = Queue(), Queue()
return Connection(a, b), Connection(b, a)
class Connection(object):
def __init__(self, _in, _out):
self._out = _out
self._in = _in
self.send = self.send_bytes = _out.put
self.recv = self.recv_bytes = _in.get
def poll(self, timeout=0.0):
if self._in.qsize() > 0:
return True
if timeout <= 0.0:
return False
self._in.not_empty.acquire()
self._in.not_empty.wait(timeout)
self._in.not_empty.release()
return self._in.qsize() > 0
def close(self):
pass
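
A short sketch of how the queue-backed `Pipe` above behaves (illustrative; assumes billiard is importable):

    from billiard.dummy.connection import Pipe

    a, b = Pipe()
    a.send({'msg': 'ping'})
    print(b.poll())    # -> True: an item is already waiting in the queue
    print(b.recv())    # -> {'msg': 'ping'}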

112
billiard/einfo.py Normal file

@@ -0,0 +1,112 @@
from __future__ import absolute_import
import sys
import traceback
class _Code(object):
def __init__(self, code):
self.co_filename = code.co_filename
self.co_name = code.co_name
class _Frame(object):
Code = _Code
def __init__(self, frame):
self.f_globals = {
"__file__": frame.f_globals.get("__file__", "__main__"),
"__name__": frame.f_globals.get("__name__"),
"__loader__": None,
}
self.f_locals = fl = {}
try:
fl["__traceback_hide__"] = frame.f_locals["__traceback_hide__"]
except KeyError:
pass
self.f_code = self.Code(frame.f_code)
self.f_lineno = frame.f_lineno
class _Object(object):
def __init__(self, **kw):
for k, v in kw.items(): setattr(self, k, v)
class _Truncated(object):
def __init__(self):
self.tb_lineno = -1
self.tb_frame = _Object(
f_globals={"__file__": "",
"__name__": "",
"__loader__": None},
f_fileno=None,
f_code=_Object(co_filename="...",
co_name="[rest of traceback truncated]"),
)
self.tb_next = None
class Traceback(object):
Frame = _Frame
tb_frame = tb_lineno = tb_next = None
max_frames = sys.getrecursionlimit() // 8
def __init__(self, tb, max_frames=None, depth=0):
limit = self.max_frames = max_frames or self.max_frames
self.tb_frame = self.Frame(tb.tb_frame)
self.tb_lineno = tb.tb_lineno
if tb.tb_next is not None:
if depth <= limit:
self.tb_next = Traceback(tb.tb_next, limit, depth + 1)
else:
self.tb_next = _Truncated()
class ExceptionInfo(object):
"""Exception wrapping an exception and its traceback.
:param exc_info: The exception info tuple as returned by
:func:`sys.exc_info`.
"""
#: Exception type.
type = None
#: Exception instance.
exception = None
#: Pickleable traceback instance for use with :mod:`traceback`
tb = None
#: String representation of the traceback.
traceback = None
#: Set to true if this is an internal error.
internal = False
def __init__(self, exc_info=None, internal=False):
self.type, self.exception, tb = exc_info or sys.exc_info()
try:
self.tb = Traceback(tb)
self.traceback = ''.join(
traceback.format_exception(self.type, self.exception, tb),
)
self.internal = internal
finally:
del(tb)
def __str__(self):
return self.traceback
def __repr__(self):
return "<ExceptionInfo: %r>" % (self.exception, )
@property
def exc_info(self):
return self.type, self.exception, self.tb
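
To make the purpose of the wrappers above concrete, a small sketch showing that `ExceptionInfo` survives pickling where a raw traceback object would not (assumes billiard is importable):

    import pickle
    from billiard.einfo import ExceptionInfo

    try:
        1 / 0
    except ZeroDivisionError:
        einfo = ExceptionInfo()            # captures sys.exc_info()

    data = pickle.dumps(einfo)             # a real traceback would fail here
    restored = pickle.loads(data)
    print(restored.type.__name__)          # -> 'ZeroDivisionError'
    print(str(restored).splitlines()[-1])  # last line of the formatted traceback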

54
billiard/exceptions.py Normal file

@@ -0,0 +1,54 @@
from __future__ import absolute_import
try:
from multiprocessing import (
ProcessError,
BufferTooShort,
TimeoutError,
AuthenticationError,
)
except ImportError:
class ProcessError(Exception): # noqa
pass
class BufferTooShort(Exception): # noqa
pass
class TimeoutError(Exception): # noqa
pass
class AuthenticationError(Exception): # noqa
pass
class TimeLimitExceeded(Exception):
"""The time limit has been exceeded and the job has been terminated."""
def __str__(self):
return "TimeLimitExceeded%s" % (self.args, )
class SoftTimeLimitExceeded(Exception):
"""The soft time limit has been exceeded. This exception is raised
to give the task a chance to clean up."""
def __str__(self):
return "SoftTimeLimitExceeded%s" % (self.args, )
class WorkerLostError(Exception):
"""The worker processing a job has exited prematurely."""
class Terminated(Exception):
"""The worker processing a job has been terminated by user request."""
class RestartFreqExceeded(Exception):
"""Restarts too fast."""
class CoroStop(Exception):
"""Coroutine exit, as opposed to StopIteration which may
mean it should be restarted."""
pass
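
A hedged sketch of the clean-up pattern `SoftTimeLimitExceeded` enables; the `job.run()`/`job.rollback()` hooks here are hypothetical stand-ins, not billiard API:

    from billiard.exceptions import SoftTimeLimitExceeded

    def do_work(job):
        try:
            return job.run()
        except SoftTimeLimitExceeded:
            job.rollback()  # hypothetical clean-up hook; the hard limit
            raise           # (TimeLimitExceeded) gives no such chance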

188
billiard/five.py Normal file

@@ -0,0 +1,188 @@
# -*- coding: utf-8 -*-
"""
billiard.five
~~~~~~~~~~~~~
Compatibility implementations of features
only available in newer Python versions.
"""
from __future__ import absolute_import
# ############# py3k #########################################################
import sys
PY3 = sys.version_info[0] == 3
try:
reload = reload # noqa
except NameError: # pragma: no cover
from imp import reload # noqa
try:
from UserList import UserList # noqa
except ImportError: # pragma: no cover
from collections import UserList # noqa
try:
from UserDict import UserDict # noqa
except ImportError: # pragma: no cover
from collections import UserDict # noqa
# ############# time.monotonic ###############################################
if sys.version_info < (3, 3):
import platform
SYSTEM = platform.system()
if SYSTEM == 'Darwin':
import ctypes
from ctypes.util import find_library
libSystem = ctypes.CDLL('libSystem.dylib')
CoreServices = ctypes.CDLL(find_library('CoreServices'),
use_errno=True)
mach_absolute_time = libSystem.mach_absolute_time
mach_absolute_time.restype = ctypes.c_uint64
absolute_to_nanoseconds = CoreServices.AbsoluteToNanoseconds
absolute_to_nanoseconds.restype = ctypes.c_uint64
absolute_to_nanoseconds.argtypes = [ctypes.c_uint64]
def _monotonic():
return absolute_to_nanoseconds(mach_absolute_time()) * 1e-9
elif SYSTEM == 'Linux':
# from stackoverflow:
# questions/1205722/how-do-i-get-monotonic-time-durations-in-python
import ctypes
import os
CLOCK_MONOTONIC = 1 # see <linux/time.h>
class timespec(ctypes.Structure):
_fields_ = [
('tv_sec', ctypes.c_long),
('tv_nsec', ctypes.c_long),
]
librt = ctypes.CDLL('librt.so.1', use_errno=True)
clock_gettime = librt.clock_gettime
clock_gettime.argtypes = [
ctypes.c_int, ctypes.POINTER(timespec),
]
def _monotonic(): # noqa
t = timespec()
if clock_gettime(CLOCK_MONOTONIC, ctypes.pointer(t)) != 0:
errno_ = ctypes.get_errno()
raise OSError(errno_, os.strerror(errno_))
return t.tv_sec + t.tv_nsec * 1e-9
else:
from time import time as _monotonic
try:
from time import monotonic
except ImportError:
monotonic = _monotonic # noqa
if PY3:
import builtins
from queue import Queue, Empty, Full
from itertools import zip_longest
from io import StringIO, BytesIO
map = map
string = str
string_t = str
long_t = int
text_t = str
range = range
int_types = (int, )
open_fqdn = 'builtins.open'
def items(d):
return d.items()
def keys(d):
return d.keys()
def values(d):
return d.values()
def nextfun(it):
return it.__next__
exec_ = getattr(builtins, 'exec')
def reraise(tp, value, tb=None):
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
class WhateverIO(StringIO):
def write(self, data):
if isinstance(data, bytes):
data = data.encode()
StringIO.write(self, data)
else:
import __builtin__ as builtins # noqa
from Queue import Queue, Empty, Full # noqa
from itertools import imap as map, izip_longest as zip_longest # noqa
from StringIO import StringIO # noqa
string = unicode # noqa
string_t = basestring # noqa
text_t = unicode
long_t = long # noqa
range = xrange
int_types = (int, long)
open_fqdn = '__builtin__.open'
def items(d): # noqa
return d.iteritems()
def keys(d): # noqa
return d.iterkeys()
def values(d): # noqa
return d.itervalues()
def nextfun(it): # noqa
return it.next
def exec_(code, globs=None, locs=None):
"""Execute code in a namespace."""
if globs is None:
frame = sys._getframe(1)
globs = frame.f_globals
if locs is None:
locs = frame.f_locals
del frame
elif locs is None:
locs = globs
exec("""exec code in globs, locs""")
exec_("""def reraise(tp, value, tb=None): raise tp, value, tb""")
BytesIO = WhateverIO = StringIO # noqa
def with_metaclass(Type, skip_attrs=set(['__dict__', '__weakref__'])):
"""Class decorator to set metaclass.
Works with both Python 2 and Python 3 and it does not add
an extra class in the lookup order like ``six.with_metaclass`` does
(that is -- it copies the original class instead of using inheritance).
"""
def _clone_with_metaclass(Class):
attrs = dict((key, value) for key, value in items(vars(Class))
if key not in skip_attrs)
return Type(Class.__name__, Class.__bases__, attrs)
return _clone_with_metaclass
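
A brief sketch of `with_metaclass` in action, showing the copy-instead-of-inherit behaviour the docstring describes (assumes billiard is importable):

    from billiard.five import with_metaclass

    class Registry(type):
        classes = []
        def __init__(cls, name, bases, attrs):
            super(Registry, cls).__init__(name, bases, attrs)
            Registry.classes.append(name)

    @with_metaclass(Registry)
    class Task(object):
        pass

    print(Registry.classes)   # -> ['Task'] on both Python 2 and Python 3
    print(Task.__mro__)       # -> (Task, object): no extra helper base class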

580
billiard/forking.py Normal file

@@ -0,0 +1,580 @@
#
# Module for starting a process object using os.fork() or CreateProcess()
#
# multiprocessing/forking.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
import os
import sys
import signal
import warnings
from pickle import load, HIGHEST_PROTOCOL
from billiard import util
from billiard import process
from billiard.five import int_types
from .reduction import dump
from .compat import _winapi as win32
__all__ = ['Popen', 'assert_spawning', 'exit',
'duplicate', 'close']
try:
WindowsError = WindowsError # noqa
except NameError:
class WindowsError(Exception): # noqa
pass
W_OLD_DJANGO_LAYOUT = """\
Will add directory %r to path! This is necessary to accommodate \
pre-Django 1.4 layouts using setup_environ.
You can skip this warning by adding a DJANGO_SETTINGS_MODULE=settings \
environment variable.
"""
#
# Choose whether to do a fork or spawn (fork+exec) on Unix.
# This affects how some shared resources should be created.
#
_forking_is_enabled = sys.platform != 'win32'
#
# Check that the current thread is spawning a child process
#
def assert_spawning(self):
if not Popen.thread_is_spawning():
raise RuntimeError(
'%s objects should only be shared between processes'
' through inheritance' % type(self).__name__
)
#
# Unix
#
if sys.platform != 'win32':
try:
import thread
except ImportError:
import _thread as thread # noqa
import select
WINEXE = False
WINSERVICE = False
exit = os._exit
duplicate = os.dup
close = os.close
_select = util._eintr_retry(select.select)
#
# We define a Popen class similar to the one from subprocess, but
# whose constructor takes a process object as its argument.
#
class Popen(object):
_tls = thread._local()
def __init__(self, process_obj):
# register reducers
from billiard import connection # noqa
_Django_old_layout_hack__save()
sys.stdout.flush()
sys.stderr.flush()
self.returncode = None
r, w = os.pipe()
self.sentinel = r
if _forking_is_enabled:
self.pid = os.fork()
if self.pid == 0:
os.close(r)
if 'random' in sys.modules:
import random
random.seed()
code = process_obj._bootstrap()
os._exit(code)
else:
from_parent_fd, to_child_fd = os.pipe()
cmd = get_command_line() + [str(from_parent_fd)]
self.pid = os.fork()
if self.pid == 0:
os.close(r)
os.close(to_child_fd)
os.execv(sys.executable, cmd)
# send information to child
prep_data = get_preparation_data(process_obj._name)
os.close(from_parent_fd)
to_child = os.fdopen(to_child_fd, 'wb')
Popen._tls.process_handle = self.pid
try:
dump(prep_data, to_child, HIGHEST_PROTOCOL)
dump(process_obj, to_child, HIGHEST_PROTOCOL)
finally:
del(Popen._tls.process_handle)
to_child.close()
# `w` will be closed when the child exits, at which point `r`
# will become ready for reading (using e.g. select()).
os.close(w)
util.Finalize(self, os.close, (r,))
def poll(self, flag=os.WNOHANG):
if self.returncode is None:
try:
pid, sts = os.waitpid(self.pid, flag)
except os.error:
# Child process not yet created. See #1731717
# e.errno == errno.ECHILD == 10
return None
if pid == self.pid:
if os.WIFSIGNALED(sts):
self.returncode = -os.WTERMSIG(sts)
else:
assert os.WIFEXITED(sts)
self.returncode = os.WEXITSTATUS(sts)
return self.returncode
def wait(self, timeout=None):
if self.returncode is None:
if timeout is not None:
r = _select([self.sentinel], [], [], timeout)[0]
if not r:
return None
# This shouldn't block if select() returned successfully.
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
return self.returncode
def terminate(self):
if self.returncode is None:
try:
os.kill(self.pid, signal.SIGTERM)
except OSError:
if self.wait(timeout=0.1) is None:
raise
@staticmethod
def thread_is_spawning():
if _forking_is_enabled:
return False
else:
return getattr(Popen._tls, 'process_handle', None) is not None
@staticmethod
def duplicate_for_child(handle):
return handle
#
# Windows
#
else:
try:
import thread
except ImportError:
import _thread as thread # noqa
import msvcrt
try:
import _subprocess
except ImportError:
import _winapi as _subprocess # noqa
#
#
#
TERMINATE = 0x10000
WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False))
WINSERVICE = sys.executable.lower().endswith("pythonservice.exe")
exit = win32.ExitProcess
close = win32.CloseHandle
#
#
#
def duplicate(handle, target_process=None, inheritable=False):
if target_process is None:
target_process = _subprocess.GetCurrentProcess()
h = _subprocess.DuplicateHandle(
_subprocess.GetCurrentProcess(), handle, target_process,
0, inheritable, _subprocess.DUPLICATE_SAME_ACCESS
)
if sys.version_info[0] < 3 or (
sys.version_info[0] == 3 and sys.version_info[1] < 3):
h = h.Detach()
return h
#
# We define a Popen class similar to the one from subprocess, but
# whose constructor takes a process object as its argument.
#
class Popen(object):
'''
Start a subprocess to run the code of a process object
'''
_tls = thread._local()
def __init__(self, process_obj):
_Django_old_layout_hack__save()
# create pipe for communication with child
rfd, wfd = os.pipe()
# get handle for read end of the pipe and make it inheritable
rhandle = duplicate(msvcrt.get_osfhandle(rfd), inheritable=True)
os.close(rfd)
# start process
cmd = get_command_line() + [rhandle]
cmd = ' '.join('"%s"' % x for x in cmd)
hp, ht, pid, tid = _subprocess.CreateProcess(
_python_exe, cmd, None, None, 1, 0, None, None, None
)
close(ht) if isinstance(ht, int_types) else ht.Close()
(close(rhandle) if isinstance(rhandle, int_types)
else rhandle.Close())
# set attributes of self
self.pid = pid
self.returncode = None
self._handle = hp
self.sentinel = int(hp)
# send information to child
prep_data = get_preparation_data(process_obj._name)
to_child = os.fdopen(wfd, 'wb')
Popen._tls.process_handle = int(hp)
try:
dump(prep_data, to_child, HIGHEST_PROTOCOL)
dump(process_obj, to_child, HIGHEST_PROTOCOL)
finally:
del Popen._tls.process_handle
to_child.close()
@staticmethod
def thread_is_spawning():
return getattr(Popen._tls, 'process_handle', None) is not None
@staticmethod
def duplicate_for_child(handle):
return duplicate(handle, Popen._tls.process_handle)
def wait(self, timeout=None):
if self.returncode is None:
if timeout is None:
msecs = _subprocess.INFINITE
else:
msecs = max(0, int(timeout * 1000 + 0.5))
res = _subprocess.WaitForSingleObject(int(self._handle), msecs)
if res == _subprocess.WAIT_OBJECT_0:
code = _subprocess.GetExitCodeProcess(self._handle)
if code == TERMINATE:
code = -signal.SIGTERM
self.returncode = code
return self.returncode
def poll(self):
return self.wait(timeout=0)
def terminate(self):
if self.returncode is None:
try:
_subprocess.TerminateProcess(int(self._handle), TERMINATE)
except WindowsError:
if self.wait(timeout=0.1) is None:
raise
#
#
#
if WINSERVICE:
_python_exe = os.path.join(sys.exec_prefix, 'python.exe')
else:
_python_exe = sys.executable
def set_executable(exe):
global _python_exe
_python_exe = exe
def is_forking(argv):
'''
Return whether commandline indicates we are forking
'''
if len(argv) >= 2 and argv[1] == '--billiard-fork':
assert len(argv) == 3
os.environ["FORKED_BY_MULTIPROCESSING"] = "1"
return True
else:
return False
def freeze_support():
'''
Run code for process object if this is not the main process
'''
if is_forking(sys.argv):
main()
sys.exit()
def get_command_line():
'''
Returns prefix of command line used for spawning a child process
'''
if process.current_process()._identity == () and is_forking(sys.argv):
raise RuntimeError('''
Attempt to start a new process before the current process
has finished its bootstrapping phase.
This probably means that you have forgotten to use the proper
idiom in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce a Windows executable.''')
if getattr(sys, 'frozen', False):
return [sys.executable, '--billiard-fork']
else:
prog = 'from billiard.forking import main; main()'
return [_python_exe, '-c', prog, '--billiard-fork']
def _Django_old_layout_hack__save():
if 'DJANGO_PROJECT_DIR' not in os.environ:
try:
settings_name = os.environ['DJANGO_SETTINGS_MODULE']
except KeyError:
return # not using Django.
conf_settings = sys.modules.get('django.conf.settings')
configured = conf_settings and conf_settings.configured
try:
project_name, _ = settings_name.split('.', 1)
except ValueError:
return # not modified by setup_environ
project = __import__(project_name)
try:
project_dir = os.path.normpath(_module_parent_dir(project))
except AttributeError:
return # dynamically generated module (no __file__)
if configured:
warnings.warn(UserWarning(
W_OLD_DJANGO_LAYOUT % os.path.realpath(project_dir)
))
os.environ['DJANGO_PROJECT_DIR'] = project_dir
def _Django_old_layout_hack__load():
try:
sys.path.append(os.environ['DJANGO_PROJECT_DIR'])
except KeyError:
pass
def _module_parent_dir(mod):
dir, filename = os.path.split(_module_dir(mod))
if dir == os.curdir or not dir:
dir = os.getcwd()
return dir
def _module_dir(mod):
if '__init__.py' in mod.__file__:
return os.path.dirname(mod.__file__)
return mod.__file__
def main():
'''
Run code specified by data received over pipe
'''
global _forking_is_enabled
_Django_old_layout_hack__load()
assert is_forking(sys.argv)
_forking_is_enabled = False
handle = int(sys.argv[-1])
if sys.platform == 'win32':
fd = msvcrt.open_osfhandle(handle, os.O_RDONLY)
else:
fd = handle
from_parent = os.fdopen(fd, 'rb')
process.current_process()._inheriting = True
preparation_data = load(from_parent)
prepare(preparation_data)
# Huge hack to make logging before Process.run work.
try:
os.environ["MP_MAIN_FILE"] = sys.modules["__main__"].__file__
except KeyError:
pass
except AttributeError:
pass
loglevel = os.environ.get("_MP_FORK_LOGLEVEL_")
logfile = os.environ.get("_MP_FORK_LOGFILE_") or None
format = os.environ.get("_MP_FORK_LOGFORMAT_")
if loglevel:
from billiard import util
import logging
logger = util.get_logger()
logger.setLevel(int(loglevel))
if not logger.handlers:
logger._rudimentary_setup = True
logfile = logfile or sys.__stderr__
if hasattr(logfile, "write"):
handler = logging.StreamHandler(logfile)
else:
handler = logging.FileHandler(logfile)
formatter = logging.Formatter(
format or util.DEFAULT_LOGGING_FORMAT,
)
handler.setFormatter(formatter)
logger.addHandler(handler)
self = load(from_parent)
process.current_process()._inheriting = False
from_parent.close()
exitcode = self._bootstrap()
exit(exitcode)
def get_preparation_data(name):
'''
Return info about parent needed by child to unpickle process object
'''
from billiard.util import _logger, _log_to_stderr
d = dict(
name=name,
sys_path=sys.path,
sys_argv=sys.argv,
log_to_stderr=_log_to_stderr,
orig_dir=process.ORIGINAL_DIR,
authkey=process.current_process().authkey,
)
if _logger is not None:
d['log_level'] = _logger.getEffectiveLevel()
if not WINEXE and not WINSERVICE:
main_path = getattr(sys.modules['__main__'], '__file__', None)
if not main_path and sys.argv[0] not in ('', '-c'):
main_path = sys.argv[0]
if main_path is not None:
if (not os.path.isabs(main_path) and
process.ORIGINAL_DIR is not None):
main_path = os.path.join(process.ORIGINAL_DIR, main_path)
d['main_path'] = os.path.normpath(main_path)
return d
#
# Prepare current process
#
old_main_modules = []
def prepare(data):
'''
Try to get current process ready to unpickle process object
'''
old_main_modules.append(sys.modules['__main__'])
if 'name' in data:
process.current_process().name = data['name']
if 'authkey' in data:
process.current_process()._authkey = data['authkey']
if 'log_to_stderr' in data and data['log_to_stderr']:
util.log_to_stderr()
if 'log_level' in data:
util.get_logger().setLevel(data['log_level'])
if 'sys_path' in data:
sys.path = data['sys_path']
if 'sys_argv' in data:
sys.argv = data['sys_argv']
if 'dir' in data:
os.chdir(data['dir'])
if 'orig_dir' in data:
process.ORIGINAL_DIR = data['orig_dir']
if 'main_path' in data:
main_path = data['main_path']
main_name = os.path.splitext(os.path.basename(main_path))[0]
if main_name == '__init__':
main_name = os.path.basename(os.path.dirname(main_path))
if main_name == '__main__':
main_module = sys.modules['__main__']
main_module.__file__ = main_path
elif main_name != 'ipython':
# Main modules not actually called __main__.py may
# contain additional code that should still be executed
import imp
if main_path is None:
dirs = None
elif os.path.basename(main_path).startswith('__init__.py'):
dirs = [os.path.dirname(os.path.dirname(main_path))]
else:
dirs = [os.path.dirname(main_path)]
assert main_name not in sys.modules, main_name
file, path_name, etc = imp.find_module(main_name, dirs)
try:
# We would like to do "imp.load_module('__main__', ...)"
# here. However, that would cause 'if __name__ ==
# "__main__"' clauses to be executed.
main_module = imp.load_module(
'__parents_main__', file, path_name, etc
)
finally:
if file:
file.close()
sys.modules['__main__'] = main_module
main_module.__name__ = '__main__'
# Try to make the potentially picklable objects in
# sys.modules['__main__'] realize they are in the main
# module -- somewhat ugly.
for obj in list(main_module.__dict__.values()):
try:
if obj.__module__ == '__parents_main__':
obj.__module__ = '__main__'
except Exception:
pass
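
The RuntimeError text in `get_command_line()` above refers to the standard main-module idiom; a minimal sketch of it (assumes billiard is installed):

    from billiard import Process
    from billiard.forking import freeze_support

    def worker():
        print('hello from the child process')

    if __name__ == '__main__':
        freeze_support()   # no-op unless this is a forked/frozen child
        p = Process(target=worker)
        p.start()
        p.join()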

255
billiard/heap.py Normal file

@@ -0,0 +1,255 @@
#
# Module which supports allocation of memory from an mmap
#
# multiprocessing/heap.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
import bisect
import mmap
import os
import sys
import threading
import itertools
from ._ext import _billiard, win32
from .util import Finalize, info, get_temp_dir
from .forking import assert_spawning
from .reduction import ForkingPickler
__all__ = ['BufferWrapper']
try:
maxsize = sys.maxsize
except AttributeError:
maxsize = sys.maxint
#
# Inheritable class which wraps an mmap, and from which blocks can be allocated
#
if sys.platform == 'win32':
class Arena(object):
_counter = itertools.count()
def __init__(self, size):
self.size = size
self.name = 'pym-%d-%d' % (os.getpid(), next(Arena._counter))
self.buffer = mmap.mmap(-1, self.size, tagname=self.name)
assert win32.GetLastError() == 0, 'tagname already in use'
self._state = (self.size, self.name)
def __getstate__(self):
assert_spawning(self)
return self._state
def __setstate__(self, state):
self.size, self.name = self._state = state
self.buffer = mmap.mmap(-1, self.size, tagname=self.name)
assert win32.GetLastError() == win32.ERROR_ALREADY_EXISTS
else:
class Arena(object):
_counter = itertools.count()
def __init__(self, size, fileno=-1):
from .forking import _forking_is_enabled
self.size = size
self.fileno = fileno
if fileno == -1 and not _forking_is_enabled:
name = os.path.join(
get_temp_dir(),
'pym-%d-%d' % (os.getpid(), next(self._counter)))
self.fileno = os.open(
name, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o600)
os.unlink(name)
os.ftruncate(self.fileno, size)
self.buffer = mmap.mmap(self.fileno, self.size)
def reduce_arena(a):
if a.fileno == -1:
raise ValueError('Arena is unpicklable because '
'forking was enabled when it was created')
return Arena, (a.size, a.fileno)
ForkingPickler.register(Arena, reduce_arena)
#
# Class allowing allocation of chunks of memory from arenas
#
class Heap(object):
_alignment = 8
def __init__(self, size=mmap.PAGESIZE):
self._lastpid = os.getpid()
self._lock = threading.Lock()
self._size = size
self._lengths = []
self._len_to_seq = {}
self._start_to_block = {}
self._stop_to_block = {}
self._allocated_blocks = set()
self._arenas = []
# list of pending blocks to free - see free() comment below
self._pending_free_blocks = []
@staticmethod
def _roundup(n, alignment):
# alignment must be a power of 2
mask = alignment - 1
return (n + mask) & ~mask
def _malloc(self, size):
# returns a large enough block -- it might be much larger
i = bisect.bisect_left(self._lengths, size)
if i == len(self._lengths):
length = self._roundup(max(self._size, size), mmap.PAGESIZE)
self._size *= 2
info('allocating a new mmap of length %d', length)
arena = Arena(length)
self._arenas.append(arena)
return (arena, 0, length)
else:
length = self._lengths[i]
seq = self._len_to_seq[length]
block = seq.pop()
if not seq:
del self._len_to_seq[length], self._lengths[i]
(arena, start, stop) = block
del self._start_to_block[(arena, start)]
del self._stop_to_block[(arena, stop)]
return block
def _free(self, block):
# free location and try to merge with neighbours
(arena, start, stop) = block
try:
prev_block = self._stop_to_block[(arena, start)]
except KeyError:
pass
else:
start, _ = self._absorb(prev_block)
try:
next_block = self._start_to_block[(arena, stop)]
except KeyError:
pass
else:
_, stop = self._absorb(next_block)
block = (arena, start, stop)
length = stop - start
try:
self._len_to_seq[length].append(block)
except KeyError:
self._len_to_seq[length] = [block]
bisect.insort(self._lengths, length)
self._start_to_block[(arena, start)] = block
self._stop_to_block[(arena, stop)] = block
def _absorb(self, block):
# deregister this block so it can be merged with a neighbour
(arena, start, stop) = block
del self._start_to_block[(arena, start)]
del self._stop_to_block[(arena, stop)]
length = stop - start
seq = self._len_to_seq[length]
seq.remove(block)
if not seq:
del self._len_to_seq[length]
self._lengths.remove(length)
return start, stop
def _free_pending_blocks(self):
# Free all the blocks in the pending list - called with the lock held
while 1:
try:
block = self._pending_free_blocks.pop()
except IndexError:
break
self._allocated_blocks.remove(block)
self._free(block)
def free(self, block):
# free a block returned by malloc()
# Since free() can be called asynchronously by the GC, it could happen
# that it's called while self._lock is held: in that case,
# self._lock.acquire() would deadlock (issue #12352). To avoid that, a
# trylock is used instead, and if the lock can't be acquired
# immediately, the block is added to a list of blocks to be freed
# synchronously sometimes later from malloc() or free(), by calling
# _free_pending_blocks() (appending and retrieving from a list is not
# strictly thread-safe but under cPython it's atomic thanks
# to the GIL).
assert os.getpid() == self._lastpid
if not self._lock.acquire(False):
# can't acquire the lock right now, add the block to the list of
# pending blocks to free
self._pending_free_blocks.append(block)
else:
# we hold the lock
try:
self._free_pending_blocks()
self._allocated_blocks.remove(block)
self._free(block)
finally:
self._lock.release()
def malloc(self, size):
# return a block of right size (possibly rounded up)
assert 0 <= size < maxsize
if os.getpid() != self._lastpid:
self.__init__() # reinitialize after fork
self._lock.acquire()
self._free_pending_blocks()
try:
size = self._roundup(max(size, 1), self._alignment)
(arena, start, stop) = self._malloc(size)
new_stop = start + size
if new_stop < stop:
self._free((arena, new_stop, stop))
block = (arena, start, new_stop)
self._allocated_blocks.add(block)
return block
finally:
self._lock.release()
#
# Class representing a chunk of an mmap -- can be inherited
#
class BufferWrapper(object):
_heap = Heap()
def __init__(self, size):
assert 0 <= size < maxsize
block = BufferWrapper._heap.malloc(size)
self._state = (block, size)
Finalize(self, BufferWrapper._heap.free, args=(block,))
def get_address(self):
(arena, start, stop), size = self._state
address, length = _billiard.address_of_buffer(arena.buffer)
assert size <= length
return address + start
def get_size(self):
return self._state[1]
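
A small sketch of the allocator's public face (illustrative; `get_address()` requires the compiled `_billiard` extension):

    from billiard.heap import BufferWrapper

    wrapper = BufferWrapper(512)      # rounded up to the 8-byte alignment
    print(wrapper.get_size())         # -> 512
    address = wrapper.get_address()   # raw address inside the backing mmap arena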

1169
billiard/managers.py Normal file

File diff suppressed because it is too large

1947
billiard/pool.py Normal file

File diff suppressed because it is too large

368
billiard/process.py Normal file

@@ -0,0 +1,368 @@
#
# Module providing the `Process` class which emulates `threading.Thread`
#
# multiprocessing/process.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
__all__ = ['Process', 'current_process', 'active_children']
#
# Imports
#
import os
import sys
import signal
import itertools
import binascii
import logging
import threading
from multiprocessing import process as _mproc
from .compat import bytes
try:
from _weakrefset import WeakSet
except ImportError:
WeakSet = None # noqa
from .five import items, string_t
try:
ORIGINAL_DIR = os.path.abspath(os.getcwd())
except OSError:
ORIGINAL_DIR = None
#
# Public functions
#
def current_process():
'''
Return process object representing the current process
'''
return _current_process
def _set_current_process(process):
global _current_process
_current_process = _mproc._current_process = process
def _cleanup():
# check for processes which have finished
if _current_process is not None:
for p in list(_current_process._children):
if p._popen.poll() is not None:
_current_process._children.discard(p)
def _maybe_flush(f):
try:
f.flush()
except (AttributeError, EnvironmentError, NotImplementedError):
pass
def active_children(_cleanup=_cleanup):
'''
Return list of process objects corresponding to live child processes
'''
try:
_cleanup()
except TypeError:
# called after gc collect so _cleanup does not exist anymore
return []
if _current_process is not None:
return list(_current_process._children)
return []
class Process(object):
'''
Process objects represent activity that is run in a separate process
The class is analogous to `threading.Thread`
'''
_Popen = None
def __init__(self, group=None, target=None, name=None,
args=(), kwargs={}, daemon=None, **_kw):
assert group is None, 'group argument must be None for now'
count = next(_current_process._counter)
self._identity = _current_process._identity + (count,)
self._authkey = _current_process._authkey
if daemon is not None:
self._daemonic = daemon
else:
self._daemonic = _current_process._daemonic
self._tempdir = _current_process._tempdir
self._semprefix = _current_process._semprefix
self._unlinkfd = _current_process._unlinkfd
self._parent_pid = os.getpid()
self._popen = None
self._target = target
self._args = tuple(args)
self._kwargs = dict(kwargs)
self._name = (
name or type(self).__name__ + '-' +
':'.join(str(i) for i in self._identity)
)
if _dangling is not None:
_dangling.add(self)
def run(self):
'''
Method to be run in sub-process; can be overridden in sub-class
'''
if self._target:
self._target(*self._args, **self._kwargs)
def start(self):
'''
Start child process
'''
assert self._popen is None, 'cannot start a process twice'
assert self._parent_pid == os.getpid(), \
'can only start a process object created by current process'
_cleanup()
if self._Popen is not None:
Popen = self._Popen
else:
from .forking import Popen
self._popen = Popen(self)
self._sentinel = self._popen.sentinel
_current_process._children.add(self)
def terminate(self):
'''
Terminate process; sends SIGTERM signal or uses TerminateProcess()
'''
self._popen.terminate()
def join(self, timeout=None):
'''
Wait until child process terminates
'''
assert self._parent_pid == os.getpid(), 'can only join a child process'
assert self._popen is not None, 'can only join a started process'
res = self._popen.wait(timeout)
if res is not None:
_current_process._children.discard(self)
def is_alive(self):
'''
Return whether process is alive
'''
if self is _current_process:
return True
assert self._parent_pid == os.getpid(), 'can only test a child process'
if self._popen is None:
return False
self._popen.poll()
return self._popen.returncode is None
def _is_alive(self):
if self._popen is None:
return False
return self._popen.poll() is None
def _get_name(self):
return self._name
def _set_name(self, value):
assert isinstance(value, string_t), 'name must be a string'
self._name = value
name = property(_get_name, _set_name)
def _get_daemon(self):
return self._daemonic
def _set_daemon(self, daemonic):
assert self._popen is None, 'process has already started'
self._daemonic = daemonic
daemon = property(_get_daemon, _set_daemon)
def _get_authkey(self):
return self._authkey
def _set_authkey(self, authkey):
self._authkey = AuthenticationString(authkey)
authkey = property(_get_authkey, _set_authkey)
@property
def exitcode(self):
'''
Return exit code of process or `None` if it has yet to stop
'''
if self._popen is None:
return self._popen
return self._popen.poll()
@property
def ident(self):
'''
Return identifier (PID) of process or `None` if it has yet to start
'''
if self is _current_process:
return os.getpid()
else:
return self._popen and self._popen.pid
pid = ident
@property
def sentinel(self):
'''
Return a file descriptor (Unix) or handle (Windows) suitable for
waiting for process termination.
'''
try:
return self._sentinel
except AttributeError:
raise ValueError("process not started")
def __repr__(self):
if self is _current_process:
status = 'started'
elif self._parent_pid != os.getpid():
status = 'unknown'
elif self._popen is None:
status = 'initial'
else:
if self._popen.poll() is not None:
status = self.exitcode
else:
status = 'started'
if type(status) is int:
if status == 0:
status = 'stopped'
else:
status = 'stopped[%s]' % _exitcode_to_name.get(status, status)
return '<%s(%s, %s%s)>' % (type(self).__name__, self._name,
status, self._daemonic and ' daemon' or '')
##
def _bootstrap(self):
from . import util
global _current_process
try:
self._children = set()
self._counter = itertools.count(1)
if sys.stdin is not None:
try:
sys.stdin.close()
sys.stdin = open(os.devnull)
except (OSError, ValueError):
pass
old_process = _current_process
_set_current_process(self)
# Re-init logging system.
# Workaround for http://bugs.python.org/issue6721/#msg140215
# Python logging module uses RLock() objects which are broken
# after fork. This can result in a deadlock (Celery Issue #496).
loggerDict = logging.Logger.manager.loggerDict
logger_names = list(loggerDict.keys())
logger_names.append(None) # for root logger
for name in logger_names:
if not name or not isinstance(loggerDict[name],
logging.PlaceHolder):
for handler in logging.getLogger(name).handlers:
handler.createLock()
logging._lock = threading.RLock()
try:
util._finalizer_registry.clear()
util._run_after_forkers()
finally:
# delay finalization of the old process object until after
# _run_after_forkers() is executed
del old_process
util.info('child process %s calling self.run()', self.pid)
try:
self.run()
exitcode = 0
finally:
util._exit_function()
except SystemExit as exc:
if not exc.args:
exitcode = 1
elif isinstance(exc.args[0], int):
exitcode = exc.args[0]
else:
sys.stderr.write(str(exc.args[0]) + '\n')
_maybe_flush(sys.stderr)
exitcode = 0 if isinstance(exc.args[0], str) else 1
except:
exitcode = 1
if not util.error('Process %s', self.name, exc_info=True):
import traceback
sys.stderr.write('Process %s:\n' % self.name)
traceback.print_exc()
finally:
util.info('process %s exiting with exitcode %d',
self.pid, exitcode)
_maybe_flush(sys.stdout)
_maybe_flush(sys.stderr)
return exitcode
#
# We subclass bytes to avoid accidental transmission of auth keys over network
#
class AuthenticationString(bytes):
def __reduce__(self):
from .forking import Popen
if not Popen.thread_is_spawning():
raise TypeError(
'Pickling an AuthenticationString object is '
'disallowed for security reasons')
return AuthenticationString, (bytes(self),)
#
# Create object representing the main process
#
class _MainProcess(Process):
def __init__(self):
self._identity = ()
self._daemonic = False
self._name = 'MainProcess'
self._parent_pid = None
self._popen = None
self._counter = itertools.count(1)
self._children = set()
self._authkey = AuthenticationString(os.urandom(32))
self._tempdir = None
self._semprefix = 'mp-' + binascii.hexlify(
os.urandom(4)).decode('ascii')
self._unlinkfd = None
_current_process = _MainProcess()
del _MainProcess
#
# Give names to some return codes
#
_exitcode_to_name = {}
for name, signum in items(signal.__dict__):
if name[:3] == 'SIG' and '_' not in name:
_exitcode_to_name[-signum] = name
_dangling = WeakSet() if WeakSet is not None else None
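
A minimal sketch of the `threading.Thread`-style API described in the `Process` docstring above (assumes billiard is installed):

    from billiard.process import Process, current_process

    class Greeter(Process):
        def run(self):                    # overridden exactly like Thread.run()
            print('hello from', current_process().name)

    if __name__ == '__main__':
        p = Greeter(name='Greeter-1')
        p.start()
        p.join()
        print(p.exitcode)                 # -> 0 on a clean exit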

0
billiard/py2/__init__.py Normal file

499
billiard/py2/connection.py Normal file

@@ -0,0 +1,499 @@
#
# A higher level module for using sockets (or Windows named pipes)
#
# multiprocessing/connection.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
__all__ = ['Client', 'Listener', 'Pipe']
import os
import random
import sys
import socket
import string
import errno
import time
import tempfile
import itertools
from .. import AuthenticationError
from .. import reduction
from .._ext import _billiard, win32
from ..compat import get_errno, setblocking, bytes as cbytes
from ..five import monotonic
from ..forking import duplicate, close
from ..reduction import ForkingPickler
from ..util import get_temp_dir, Finalize, sub_debug, debug
try:
WindowsError = WindowsError # noqa
except NameError:
WindowsError = None # noqa
# global set later
xmlrpclib = None
Connection = getattr(_billiard, 'Connection', None)
PipeConnection = getattr(_billiard, 'PipeConnection', None)
#
#
#
BUFSIZE = 8192
# A very generous timeout when it comes to local connections...
CONNECTION_TIMEOUT = 20.
_mmap_counter = itertools.count()
default_family = 'AF_INET'
families = ['AF_INET']
if hasattr(socket, 'AF_UNIX'):
default_family = 'AF_UNIX'
families += ['AF_UNIX']
if sys.platform == 'win32':
default_family = 'AF_PIPE'
families += ['AF_PIPE']
def _init_timeout(timeout=CONNECTION_TIMEOUT):
return monotonic() + timeout
def _check_timeout(t):
return monotonic() > t
#
#
#
def arbitrary_address(family):
'''
Return an arbitrary free address for the given family
'''
if family == 'AF_INET':
return ('localhost', 0)
elif family == 'AF_UNIX':
return tempfile.mktemp(prefix='listener-', dir=get_temp_dir())
elif family == 'AF_PIPE':
randomchars = ''.join(
random.choice(string.ascii_lowercase + string.digits)
for i in range(6)
)
return r'\\.\pipe\pyc-%d-%d-%s' % (
os.getpid(), next(_mmap_counter), randomchars
)
else:
raise ValueError('unrecognized family')
def address_type(address):
'''
Return the type of the address.
This can be 'AF_INET', 'AF_UNIX', or 'AF_PIPE'
'''
if type(address) == tuple:
return 'AF_INET'
elif type(address) is str and address.startswith('\\\\'):
return 'AF_PIPE'
elif type(address) is str:
return 'AF_UNIX'
else:
raise ValueError('address type of %r unrecognized' % address)
#
# Public functions
#
class Listener(object):
'''
Returns a listener object.
This is a wrapper for a bound socket which is 'listening' for
connections, or for a Windows named pipe.
'''
def __init__(self, address=None, family=None, backlog=1, authkey=None):
family = (family or
(address and address_type(address)) or
default_family)
address = address or arbitrary_address(family)
if family == 'AF_PIPE':
self._listener = PipeListener(address, backlog)
else:
self._listener = SocketListener(address, family, backlog)
if authkey is not None and not isinstance(authkey, bytes):
raise TypeError('authkey should be a byte string')
self._authkey = authkey
def accept(self):
'''
Accept a connection on the bound socket or named pipe of `self`.
Returns a `Connection` object.
'''
if self._listener is None:
raise IOError('listener is closed')
c = self._listener.accept()
if self._authkey:
deliver_challenge(c, self._authkey)
answer_challenge(c, self._authkey)
return c
def close(self):
'''
Close the bound socket or named pipe of `self`.
'''
if self._listener is not None:
self._listener.close()
self._listener = None
address = property(lambda self: self._listener._address)
last_accepted = property(lambda self: self._listener._last_accepted)
def __enter__(self):
return self
def __exit__(self, *exc_args):
self.close()
def Client(address, family=None, authkey=None):
'''
Returns a connection to the address of a `Listener`
'''
family = family or address_type(address)
if family == 'AF_PIPE':
c = PipeClient(address)
else:
c = SocketClient(address)
if authkey is not None and not isinstance(authkey, bytes):
raise TypeError('authkey should be a byte string')
if authkey is not None:
answer_challenge(c, authkey)
deliver_challenge(c, authkey)
return c
if sys.platform != 'win32':
def Pipe(duplex=True, rnonblock=False, wnonblock=False):
'''
Returns pair of connection objects at either end of a pipe
'''
if duplex:
s1, s2 = socket.socketpair()
s1.setblocking(not rnonblock)
s2.setblocking(not wnonblock)
c1 = Connection(os.dup(s1.fileno()))
c2 = Connection(os.dup(s2.fileno()))
s1.close()
s2.close()
else:
fd1, fd2 = os.pipe()
if rnonblock:
setblocking(fd1, 0)
if wnonblock:
setblocking(fd2, 0)
c1 = Connection(fd1, writable=False)
c2 = Connection(fd2, readable=False)
return c1, c2
else:
def Pipe(duplex=True, rnonblock=False, wnonblock=False): # noqa
'''
Returns pair of connection objects at either end of a pipe
'''
address = arbitrary_address('AF_PIPE')
if duplex:
openmode = win32.PIPE_ACCESS_DUPLEX
access = win32.GENERIC_READ | win32.GENERIC_WRITE
obsize, ibsize = BUFSIZE, BUFSIZE
else:
openmode = win32.PIPE_ACCESS_INBOUND
access = win32.GENERIC_WRITE
obsize, ibsize = 0, BUFSIZE
h1 = win32.CreateNamedPipe(
address, openmode,
win32.PIPE_TYPE_MESSAGE | win32.PIPE_READMODE_MESSAGE |
win32.PIPE_WAIT,
1, obsize, ibsize, win32.NMPWAIT_WAIT_FOREVER, win32.NULL
)
h2 = win32.CreateFile(
address, access, 0, win32.NULL, win32.OPEN_EXISTING, 0, win32.NULL
)
win32.SetNamedPipeHandleState(
h2, win32.PIPE_READMODE_MESSAGE, None, None
)
try:
win32.ConnectNamedPipe(h1, win32.NULL)
except WindowsError as exc:
if exc.args[0] != win32.ERROR_PIPE_CONNECTED:
raise
c1 = PipeConnection(h1, writable=duplex)
c2 = PipeConnection(h2, readable=duplex)
return c1, c2
#
# Definitions for connections based on sockets
#
class SocketListener(object):
'''
Representation of a socket which is bound to an address and listening
'''
def __init__(self, address, family, backlog=1):
self._socket = socket.socket(getattr(socket, family))
try:
# SO_REUSEADDR has different semantics on Windows (Issue #2550).
if os.name == 'posix':
self._socket.setsockopt(socket.SOL_SOCKET,
socket.SO_REUSEADDR, 1)
self._socket.bind(address)
self._socket.listen(backlog)
self._address = self._socket.getsockname()
except OSError:
self._socket.close()
raise
self._family = family
self._last_accepted = None
if family == 'AF_UNIX':
self._unlink = Finalize(
self, os.unlink, args=(address,), exitpriority=0
)
else:
self._unlink = None
def accept(self):
s, self._last_accepted = self._socket.accept()
fd = duplicate(s.fileno())
conn = Connection(fd)
s.close()
return conn
def close(self):
self._socket.close()
if self._unlink is not None:
self._unlink()
def SocketClient(address):
'''
Return a connection object connected to the socket given by `address`
'''
family = address_type(address)
s = socket.socket(getattr(socket, family))
t = _init_timeout()
while 1:
try:
s.connect(address)
except socket.error as exc:
if get_errno(exc) != errno.ECONNREFUSED or _check_timeout(t):
debug('failed to connect to address %s', address)
raise
time.sleep(0.01)
else:
break
else:
raise
fd = duplicate(s.fileno())
conn = Connection(fd)
s.close()
return conn
#
# Definitions for connections based on named pipes
#
if sys.platform == 'win32':
class PipeListener(object):
'''
Representation of a named pipe
'''
def __init__(self, address, backlog=None):
self._address = address
handle = win32.CreateNamedPipe(
address, win32.PIPE_ACCESS_DUPLEX,
win32.PIPE_TYPE_MESSAGE | win32.PIPE_READMODE_MESSAGE |
win32.PIPE_WAIT,
win32.PIPE_UNLIMITED_INSTANCES, BUFSIZE, BUFSIZE,
win32.NMPWAIT_WAIT_FOREVER, win32.NULL
)
self._handle_queue = [handle]
self._last_accepted = None
sub_debug('listener created with address=%r', self._address)
self.close = Finalize(
self, PipeListener._finalize_pipe_listener,
args=(self._handle_queue, self._address), exitpriority=0
)
def accept(self):
newhandle = win32.CreateNamedPipe(
self._address, win32.PIPE_ACCESS_DUPLEX,
win32.PIPE_TYPE_MESSAGE | win32.PIPE_READMODE_MESSAGE |
win32.PIPE_WAIT,
win32.PIPE_UNLIMITED_INSTANCES, BUFSIZE, BUFSIZE,
win32.NMPWAIT_WAIT_FOREVER, win32.NULL
)
self._handle_queue.append(newhandle)
handle = self._handle_queue.pop(0)
try:
win32.ConnectNamedPipe(handle, win32.NULL)
except WindowsError as exc:
if exc.args[0] != win32.ERROR_PIPE_CONNECTED:
raise
return PipeConnection(handle)
@staticmethod
def _finalize_pipe_listener(queue, address):
sub_debug('closing listener with address=%r', address)
for handle in queue:
close(handle)
def PipeClient(address):
'''
Return a connection object connected to the pipe given by `address`
'''
t = _init_timeout()
while 1:
try:
win32.WaitNamedPipe(address, 1000)
h = win32.CreateFile(
address, win32.GENERIC_READ | win32.GENERIC_WRITE,
0, win32.NULL, win32.OPEN_EXISTING, 0, win32.NULL,
)
except WindowsError as exc:
if exc.args[0] not in (
win32.ERROR_SEM_TIMEOUT,
win32.ERROR_PIPE_BUSY) or _check_timeout(t):
raise
else:
break
else:
raise
win32.SetNamedPipeHandleState(
h, win32.PIPE_READMODE_MESSAGE, None, None
)
return PipeConnection(h)
#
# Authentication stuff
#
MESSAGE_LENGTH = 20
CHALLENGE = cbytes('#CHALLENGE#', 'ascii')
WELCOME = cbytes('#WELCOME#', 'ascii')
FAILURE = cbytes('#FAILURE#', 'ascii')
def deliver_challenge(connection, authkey):
import hmac
assert isinstance(authkey, bytes)
message = os.urandom(MESSAGE_LENGTH)
connection.send_bytes(CHALLENGE + message)
digest = hmac.new(authkey, message).digest()
response = connection.recv_bytes(256) # reject large message
if response == digest:
connection.send_bytes(WELCOME)
else:
connection.send_bytes(FAILURE)
raise AuthenticationError('digest received was wrong')
def answer_challenge(connection, authkey):
import hmac
assert isinstance(authkey, bytes)
message = connection.recv_bytes(256) # reject large message
assert message[:len(CHALLENGE)] == CHALLENGE, 'message = %r' % message
message = message[len(CHALLENGE):]
digest = hmac.new(authkey, message).digest()
connection.send_bytes(digest)
response = connection.recv_bytes(256) # reject large message
if response != WELCOME:
raise AuthenticationError('digest sent was rejected')
#
# Support for using xmlrpclib for serialization
#
class ConnectionWrapper(object):
def __init__(self, conn, dumps, loads):
self._conn = conn
self._dumps = dumps
self._loads = loads
for attr in ('fileno', 'close', 'poll', 'recv_bytes', 'send_bytes'):
obj = getattr(conn, attr)
setattr(self, attr, obj)
def send(self, obj):
s = self._dumps(obj)
self._conn.send_bytes(s)
def recv(self):
s = self._conn.recv_bytes()
return self._loads(s)
def _xml_dumps(obj):
return xmlrpclib.dumps((obj,), None, None, None, 1).encode('utf8')
def _xml_loads(s):
(obj,), method = xmlrpclib.loads(s.decode('utf8'))
return obj
class XmlListener(Listener):
def accept(self):
global xmlrpclib
import xmlrpclib # noqa
obj = Listener.accept(self)
return ConnectionWrapper(obj, _xml_dumps, _xml_loads)
def XmlClient(*args, **kwds):
global xmlrpclib
import xmlrpclib # noqa
return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads)
if sys.platform == 'win32':
ForkingPickler.register(socket.socket, reduction.reduce_socket)
ForkingPickler.register(Connection, reduction.reduce_connection)
ForkingPickler.register(PipeConnection, reduction.reduce_pipe_connection)
else:
ForkingPickler.register(socket.socket, reduction.reduce_socket)
ForkingPickler.register(Connection, reduction.reduce_connection)
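
A hedged end-to-end sketch of the Listener/Client handshake above, using a thread so both ends run in one process (assumes billiard is importable):

    import threading
    from billiard.connection import Listener, Client

    listener = Listener(('localhost', 0), authkey=b'secret')

    def serve():
        conn = listener.accept()          # runs deliver/answer_challenge
        conn.send('authenticated!')
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    client = Client(listener.address, authkey=b'secret')
    print(client.recv())                  # -> 'authenticated!'
    client.close()
    t.join()
    listener.close()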

248
billiard/py2/reduction.py Normal file

@@ -0,0 +1,248 @@
#
# Module to allow connection and socket objects to be transferred
# between processes
#
# multiprocessing/reduction.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
__all__ = []
import os
import sys
import socket
import threading
from pickle import Pickler
from .. import current_process
from .._ext import _billiard, win32
from ..util import register_after_fork, debug, sub_debug
is_win32 = sys.platform == 'win32'
is_pypy = hasattr(sys, 'pypy_version_info')
is_py3k = sys.version_info[0] == 3
if not(is_win32 or is_pypy or is_py3k or hasattr(_billiard, 'recvfd')):
raise ImportError('pickling of connections not supported')
close = win32.CloseHandle if sys.platform == 'win32' else os.close
# globals set later
_listener = None
_lock = None
_cache = set()
#
# ForkingPickler
#
class ForkingPickler(Pickler): # noqa
dispatch = Pickler.dispatch.copy()
@classmethod
def register(cls, type, reduce):
def dispatcher(self, obj):
rv = reduce(obj)
self.save_reduce(obj=obj, *rv)
cls.dispatch[type] = dispatcher
def _reduce_method(m): # noqa
if m.__self__ is None:
return getattr, (m.im_class, m.__func__.__name__)
else:
return getattr, (m.__self__, m.__func__.__name__)
ForkingPickler.register(type(ForkingPickler.save), _reduce_method)
def _reduce_method_descriptor(m):
return getattr, (m.__objclass__, m.__name__)
ForkingPickler.register(type(list.append), _reduce_method_descriptor)
ForkingPickler.register(type(int.__add__), _reduce_method_descriptor)
try:
from functools import partial
except ImportError:
pass
else:
def _reduce_partial(p):
return _rebuild_partial, (p.func, p.args, p.keywords or {})
def _rebuild_partial(func, args, keywords):
return partial(func, *args, **keywords)
ForkingPickler.register(partial, _reduce_partial)
def dump(obj, file, protocol=None):
ForkingPickler(file, protocol).dump(obj)
#
# Platform specific definitions
#
if sys.platform == 'win32':
# XXX Should this subprocess import be here?
import _subprocess # noqa
def send_handle(conn, handle, destination_pid):
from ..forking import duplicate
process_handle = win32.OpenProcess(
win32.PROCESS_ALL_ACCESS, False, destination_pid
)
try:
new_handle = duplicate(handle, process_handle)
conn.send(new_handle)
finally:
close(process_handle)
def recv_handle(conn):
return conn.recv()
else:
def send_handle(conn, handle, destination_pid): # noqa
_billiard.sendfd(conn.fileno(), handle)
def recv_handle(conn): # noqa
return _billiard.recvfd(conn.fileno())
#
# Support for a per-process server thread which caches pickled handles
#
def _reset(obj):
global _lock, _listener, _cache
for h in _cache:
close(h)
_cache.clear()
_lock = threading.Lock()
_listener = None
_reset(None)
register_after_fork(_reset, _reset)
def _get_listener():
global _listener
if _listener is None:
_lock.acquire()
try:
if _listener is None:
from ..connection import Listener
debug('starting listener and thread for sending handles')
_listener = Listener(authkey=current_process().authkey)
t = threading.Thread(target=_serve)
t.daemon = True
t.start()
finally:
_lock.release()
return _listener
def _serve():
from ..util import is_exiting, sub_warning
while 1:
try:
conn = _listener.accept()
handle_wanted, destination_pid = conn.recv()
_cache.remove(handle_wanted)
send_handle(conn, handle_wanted, destination_pid)
close(handle_wanted)
conn.close()
except:
if not is_exiting():
sub_warning('thread for sharing handles raised exception',
exc_info=True)
#
# Functions to be used for pickling/unpickling objects with handles
#
def reduce_handle(handle):
from ..forking import Popen, duplicate
if Popen.thread_is_spawning():
return (None, Popen.duplicate_for_child(handle), True)
dup_handle = duplicate(handle)
_cache.add(dup_handle)
sub_debug('reducing handle %d', handle)
return (_get_listener().address, dup_handle, False)
def rebuild_handle(pickled_data):
from ..connection import Client
address, handle, inherited = pickled_data
if inherited:
return handle
sub_debug('rebuilding handle %d', handle)
conn = Client(address, authkey=current_process().authkey)
conn.send((handle, os.getpid()))
new_handle = recv_handle(conn)
conn.close()
return new_handle
#
# Register `_billiard.Connection` with `ForkingPickler`
#
def reduce_connection(conn):
rh = reduce_handle(conn.fileno())
return rebuild_connection, (rh, conn.readable, conn.writable)
def rebuild_connection(reduced_handle, readable, writable):
handle = rebuild_handle(reduced_handle)
return _billiard.Connection(
handle, readable=readable, writable=writable
)
#
# Register `socket.socket` with `ForkingPickler`
#
def fromfd(fd, family, type_, proto=0):
s = socket.fromfd(fd, family, type_, proto)
if s.__class__ is not socket.socket:
s = socket.socket(_sock=s)
return s
def reduce_socket(s):
reduced_handle = reduce_handle(s.fileno())
return rebuild_socket, (reduced_handle, s.family, s.type, s.proto)
def rebuild_socket(reduced_handle, family, type_, proto):
fd = rebuild_handle(reduced_handle)
_sock = fromfd(fd, family, type_, proto)
close(fd)
return _sock
ForkingPickler.register(socket.socket, reduce_socket)
#
# Register `_billiard.PipeConnection` with `ForkingPickler`
#
if sys.platform == 'win32':
def reduce_pipe_connection(conn):
rh = reduce_handle(conn.fileno())
return rebuild_pipe_connection, (rh, conn.readable, conn.writable)
def rebuild_pipe_connection(reduced_handle, readable, writable):
handle = rebuild_handle(reduced_handle)
return _billiard.PipeConnection(
handle, readable=readable, writable=writable
)
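
The same `ForkingPickler.register()` pattern used for sockets and pipes above works for user types; a minimal hedged sketch with a made-up `Token` class:

    from billiard.reduction import ForkingPickler

    class Token(object):
        def __init__(self, value):
            self.value = value

    def reduce_token(token):
        # return (rebuild_callable, args), as reduce_socket() does above
        return Token, (token.value,)

    ForkingPickler.register(Token, reduce_token)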

0
billiard/py3/__init__.py Normal file

965
billiard/py3/connection.py Normal file

@@ -0,0 +1,965 @@
#
# A higher level module for using sockets (or Windows named pipes)
#
# multiprocessing/connection.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
__all__ = ['Client', 'Listener', 'Pipe', 'wait']
import io
import os
import sys
import select
import socket
import struct
import errno
import tempfile
import itertools
import _multiprocessing
from ..compat import setblocking
from ..exceptions import AuthenticationError, BufferTooShort
from ..five import monotonic
from ..util import get_temp_dir, Finalize, sub_debug
from ..reduction import ForkingPickler
try:
import _winapi
from _winapi import (
WAIT_OBJECT_0,
WAIT_TIMEOUT,
INFINITE,
)
    # If we got here, we are running on Windows.  Handle the
    # possibly missing WAIT_ABANDONED_0 constant:
try:
from _winapi import WAIT_ABANDONED_0
except ImportError:
        # _winapi does not export this constant; fall back
        # to the known value until it is exported by _winapi
        WAIT_ABANDONED_0 = 128
except ImportError:
if sys.platform == 'win32':
raise
_winapi = None
#
#
#
BUFSIZE = 8192
# A very generous timeout when it comes to local connections...
CONNECTION_TIMEOUT = 20.
_mmap_counter = itertools.count()
default_family = 'AF_INET'
families = ['AF_INET']
if hasattr(socket, 'AF_UNIX'):
default_family = 'AF_UNIX'
families += ['AF_UNIX']
if sys.platform == 'win32':
default_family = 'AF_PIPE'
families += ['AF_PIPE']
def _init_timeout(timeout=CONNECTION_TIMEOUT):
return monotonic() + timeout
def _check_timeout(t):
return monotonic() > t
def arbitrary_address(family):
'''
Return an arbitrary free address for the given family
'''
if family == 'AF_INET':
return ('localhost', 0)
elif family == 'AF_UNIX':
return tempfile.mktemp(prefix='listener-', dir=get_temp_dir())
elif family == 'AF_PIPE':
return tempfile.mktemp(prefix=r'\\.\pipe\pyc-%d-%d-' %
(os.getpid(), next(_mmap_counter)))
else:
raise ValueError('unrecognized family')
def _validate_family(family):
'''
Checks if the family is valid for the current environment.
'''
if sys.platform != 'win32' and family == 'AF_PIPE':
raise ValueError('Family %s is not recognized.' % family)
if sys.platform == 'win32' and family == 'AF_UNIX':
# double check
if not hasattr(socket, family):
raise ValueError('Family %s is not recognized.' % family)
def address_type(address):
'''
    Return the type of the address
This can be 'AF_INET', 'AF_UNIX', or 'AF_PIPE'
'''
if type(address) == tuple:
return 'AF_INET'
elif type(address) is str and address.startswith('\\\\'):
return 'AF_PIPE'
elif type(address) is str:
return 'AF_UNIX'
else:
raise ValueError('address type of %r unrecognized' % address)
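# Illustrative addition (not part of the original file): how the
# classifier above maps concrete addresses to families.
def _example_address_type():
    assert address_type(('localhost', 0)) == 'AF_INET'
    assert address_type('/tmp/listener-test') == 'AF_UNIX'
    assert address_type('\\\\.\\pipe\\pyc-1-0-test') == 'AF_PIPE'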
#
# Connection classes
#
class _ConnectionBase:
_handle = None
def __init__(self, handle, readable=True, writable=True):
handle = handle.__index__()
if handle < 0:
raise ValueError("invalid handle")
if not readable and not writable:
raise ValueError(
"at least one of `readable` and `writable` must be True")
self._handle = handle
self._readable = readable
self._writable = writable
# XXX should we use util.Finalize instead of a __del__?
def __del__(self):
if self._handle is not None:
self._close()
def _check_closed(self):
if self._handle is None:
raise OSError("handle is closed")
def _check_readable(self):
if not self._readable:
raise OSError("connection is write-only")
def _check_writable(self):
if not self._writable:
raise OSError("connection is read-only")
def _bad_message_length(self):
if self._writable:
self._readable = False
else:
self.close()
raise OSError("bad message length")
@property
def closed(self):
"""True if the connection is closed"""
return self._handle is None
@property
def readable(self):
"""True if the connection is readable"""
return self._readable
@property
def writable(self):
"""True if the connection is writable"""
return self._writable
def fileno(self):
"""File descriptor or handle of the connection"""
self._check_closed()
return self._handle
def close(self):
"""Close the connection"""
if self._handle is not None:
try:
self._close()
finally:
self._handle = None
def send_bytes(self, buf, offset=0, size=None):
"""Send the bytes data from a bytes-like object"""
self._check_closed()
self._check_writable()
m = memoryview(buf)
# HACK for byte-indexing of non-bytewise buffers (e.g. array.array)
if m.itemsize > 1:
m = memoryview(bytes(m))
n = len(m)
if offset < 0:
raise ValueError("offset is negative")
if n < offset:
raise ValueError("buffer length < offset")
if size is None:
size = n - offset
elif size < 0:
raise ValueError("size is negative")
elif offset + size > n:
raise ValueError("buffer length < offset + size")
self._send_bytes(m[offset:offset + size])
def send(self, obj):
"""Send a (picklable) object"""
self._check_closed()
self._check_writable()
self._send_bytes(ForkingPickler.dumps(obj))
def recv_bytes(self, maxlength=None):
"""
Receive bytes data as a bytes object.
"""
self._check_closed()
self._check_readable()
if maxlength is not None and maxlength < 0:
raise ValueError("negative maxlength")
buf = self._recv_bytes(maxlength)
if buf is None:
self._bad_message_length()
return buf.getvalue()
def recv_bytes_into(self, buf, offset=0):
"""
Receive bytes data into a writeable buffer-like object.
Return the number of bytes read.
"""
self._check_closed()
self._check_readable()
with memoryview(buf) as m:
# Get bytesize of arbitrary buffer
itemsize = m.itemsize
bytesize = itemsize * len(m)
if offset < 0:
raise ValueError("negative offset")
elif offset > bytesize:
raise ValueError("offset too large")
result = self._recv_bytes()
size = result.tell()
if bytesize < offset + size:
raise BufferTooShort(result.getvalue())
# Message can fit in dest
result.seek(0)
result.readinto(
m[offset // itemsize:(offset + size) // itemsize]
)
return size
def recv_payload(self):
return self._recv_bytes().getbuffer()
def recv(self):
"""Receive a (picklable) object"""
self._check_closed()
self._check_readable()
buf = self._recv_bytes()
return ForkingPickler.loads(buf.getbuffer())
def poll(self, timeout=0.0):
"""Whether there is any input available to be read"""
self._check_closed()
self._check_readable()
return self._poll(timeout)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, exc_tb):
self.close()
if _winapi:
class PipeConnection(_ConnectionBase):
"""
Connection class based on a Windows named pipe.
Overlapped I/O is used, so the handles must have been created
with FILE_FLAG_OVERLAPPED.
"""
_got_empty_message = False
def _close(self, _CloseHandle=_winapi.CloseHandle):
_CloseHandle(self._handle)
def _send_bytes(self, buf):
ov, err = _winapi.WriteFile(self._handle, buf, overlapped=True)
try:
if err == _winapi.ERROR_IO_PENDING:
waitres = _winapi.WaitForMultipleObjects(
[ov.event], False, INFINITE)
assert waitres == WAIT_OBJECT_0
except:
ov.cancel()
raise
finally:
nwritten, err = ov.GetOverlappedResult(True)
assert err == 0
assert nwritten == len(buf)
def _recv_bytes(self, maxsize=None):
if self._got_empty_message:
self._got_empty_message = False
return io.BytesIO()
else:
bsize = 128 if maxsize is None else min(maxsize, 128)
try:
ov, err = _winapi.ReadFile(self._handle, bsize,
overlapped=True)
try:
if err == _winapi.ERROR_IO_PENDING:
waitres = _winapi.WaitForMultipleObjects(
[ov.event], False, INFINITE)
assert waitres == WAIT_OBJECT_0
except:
ov.cancel()
raise
finally:
nread, err = ov.GetOverlappedResult(True)
if err == 0:
f = io.BytesIO()
f.write(ov.getbuffer())
return f
elif err == _winapi.ERROR_MORE_DATA:
return self._get_more_data(ov, maxsize)
except OSError as e:
if e.winerror == _winapi.ERROR_BROKEN_PIPE:
raise EOFError
else:
raise
raise RuntimeError(
"shouldn't get here; expected KeyboardInterrupt"
)
def _poll(self, timeout):
if (self._got_empty_message or
_winapi.PeekNamedPipe(self._handle)[0] != 0):
return True
return bool(wait([self], timeout))
def _get_more_data(self, ov, maxsize):
buf = ov.getbuffer()
f = io.BytesIO()
f.write(buf)
left = _winapi.PeekNamedPipe(self._handle)[1]
assert left > 0
if maxsize is not None and len(buf) + left > maxsize:
self._bad_message_length()
ov, err = _winapi.ReadFile(self._handle, left, overlapped=True)
rbytes, err = ov.GetOverlappedResult(True)
assert err == 0
assert rbytes == left
f.write(ov.getbuffer())
return f
class Connection(_ConnectionBase):
"""
Connection class based on an arbitrary file descriptor (Unix only), or
a socket handle (Windows).
"""
if _winapi:
def _close(self, _close=_multiprocessing.closesocket):
_close(self._handle)
_write = _multiprocessing.send
_read = _multiprocessing.recv
else:
def _close(self, _close=os.close): # noqa
_close(self._handle)
_write = os.write
_read = os.read
def send_offset(self, buf, offset, write=_write):
return write(self._handle, buf[offset:])
def _send(self, buf, write=_write):
remaining = len(buf)
while True:
try:
n = write(self._handle, buf)
except OSError as exc:
if exc.errno == errno.EINTR:
continue
raise
remaining -= n
if remaining == 0:
break
buf = buf[n:]
def setblocking(self, blocking):
setblocking(self._handle, blocking)
def _recv(self, size, read=_read):
buf = io.BytesIO()
handle = self._handle
remaining = size
while remaining > 0:
try:
chunk = read(handle, remaining)
except OSError as exc:
if exc.errno == errno.EINTR:
continue
raise
n = len(chunk)
if n == 0:
if remaining == size:
raise EOFError
else:
raise OSError("got end of file during message")
buf.write(chunk)
remaining -= n
return buf
def _send_bytes(self, buf):
# For wire compatibility with 3.2 and lower
n = len(buf)
self._send(struct.pack("!i", n))
# The condition is necessary to avoid "broken pipe" errors
# when sending a 0-length buffer if the other end closed the pipe.
if n > 0:
self._send(buf)
def _recv_bytes(self, maxsize=None):
buf = self._recv(4)
size, = struct.unpack("!i", buf.getvalue())
if maxsize is not None and size > maxsize:
return None
return self._recv(size)
def _poll(self, timeout):
r = wait([self], timeout)
return bool(r)
#
# Public functions
#
class Listener(object):
'''
Returns a listener object.
This is a wrapper for a bound socket which is 'listening' for
connections, or for a Windows named pipe.
'''
def __init__(self, address=None, family=None, backlog=1, authkey=None):
family = (family or (address and address_type(address))
or default_family)
address = address or arbitrary_address(family)
_validate_family(family)
if family == 'AF_PIPE':
self._listener = PipeListener(address, backlog)
else:
self._listener = SocketListener(address, family, backlog)
if authkey is not None and not isinstance(authkey, bytes):
raise TypeError('authkey should be a byte string')
self._authkey = authkey
def accept(self):
'''
Accept a connection on the bound socket or named pipe of `self`.
Returns a `Connection` object.
'''
if self._listener is None:
raise OSError('listener is closed')
c = self._listener.accept()
if self._authkey:
deliver_challenge(c, self._authkey)
answer_challenge(c, self._authkey)
return c
def close(self):
'''
Close the bound socket or named pipe of `self`.
'''
if self._listener is not None:
self._listener.close()
self._listener = None
address = property(lambda self: self._listener._address)
last_accepted = property(lambda self: self._listener._last_accepted)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, exc_tb):
self.close()
def Client(address, family=None, authkey=None):
'''
Returns a connection to the address of a `Listener`
'''
family = family or address_type(address)
_validate_family(family)
if family == 'AF_PIPE':
c = PipeClient(address)
else:
c = SocketClient(address)
if authkey is not None and not isinstance(authkey, bytes):
raise TypeError('authkey should be a byte string')
if authkey is not None:
answer_challenge(c, authkey)
deliver_challenge(c, authkey)
return c
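# Illustrative addition (not part of the original file): a sketch of a
# Listener/Client handshake in one process, using a background thread
# to accept.  With an authkey both ends run the HMAC challenge defined
# further below.
def _example_listener_client():
    import threading

    def _serve_once(listener):
        with listener.accept() as conn:
            conn.send(conn.recv().upper())

    with Listener(authkey=b'secret') as listener:
        t = threading.Thread(target=_serve_once, args=(listener,))
        t.start()
        with Client(listener.address, authkey=b'secret') as conn:
            conn.send('ping')
            assert conn.recv() == 'PING'
        t.join()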
if sys.platform != 'win32':
def Pipe(duplex=True, rnonblock=False, wnonblock=False):
'''
Returns pair of connection objects at either end of a pipe
'''
if duplex:
s1, s2 = socket.socketpair()
s1.setblocking(not rnonblock)
s2.setblocking(not wnonblock)
c1 = Connection(s1.detach())
c2 = Connection(s2.detach())
else:
fd1, fd2 = os.pipe()
if rnonblock:
setblocking(fd1, 0)
if wnonblock:
setblocking(fd2, 0)
c1 = Connection(fd1, writable=False)
c2 = Connection(fd2, readable=False)
return c1, c2
else:
from billiard.forking import duplicate
def Pipe(duplex=True, rnonblock=False, wnonblock=False): # noqa
'''
Returns pair of connection objects at either end of a pipe
'''
address = arbitrary_address('AF_PIPE')
if duplex:
openmode = _winapi.PIPE_ACCESS_DUPLEX
access = _winapi.GENERIC_READ | _winapi.GENERIC_WRITE
obsize, ibsize = BUFSIZE, BUFSIZE
else:
openmode = _winapi.PIPE_ACCESS_INBOUND
access = _winapi.GENERIC_WRITE
obsize, ibsize = 0, BUFSIZE
h1 = _winapi.CreateNamedPipe(
address, openmode | _winapi.FILE_FLAG_OVERLAPPED |
_winapi.FILE_FLAG_FIRST_PIPE_INSTANCE,
_winapi.PIPE_TYPE_MESSAGE | _winapi.PIPE_READMODE_MESSAGE |
_winapi.PIPE_WAIT,
1, obsize, ibsize, _winapi.NMPWAIT_WAIT_FOREVER, _winapi.NULL
)
h2 = _winapi.CreateFile(
address, access, 0, _winapi.NULL, _winapi.OPEN_EXISTING,
_winapi.FILE_FLAG_OVERLAPPED, _winapi.NULL
)
_winapi.SetNamedPipeHandleState(
h2, _winapi.PIPE_READMODE_MESSAGE, None, None
)
overlapped = _winapi.ConnectNamedPipe(h1, overlapped=True)
_, err = overlapped.GetOverlappedResult(True)
assert err == 0
c1 = PipeConnection(duplicate(h1, inheritable=True), writable=duplex)
c2 = PipeConnection(duplicate(h2, inheritable=True), readable=duplex)
_winapi.CloseHandle(h1)
_winapi.CloseHandle(h2)
return c1, c2
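# Illustrative addition (not part of the original file): the duplex
# pipe returned above carries pickled objects in both directions.
def _example_pipe():
    c1, c2 = Pipe()
    c1.send({'answer': 42})
    assert c2.recv() == {'answer': 42}
    c2.send('ack')
    assert c1.recv() == 'ack'
    c1.close()
    c2.close()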
#
# Definitions for connections based on sockets
#
class SocketListener(object):
'''
Representation of a socket which is bound to an address and listening
'''
def __init__(self, address, family, backlog=1):
self._socket = socket.socket(getattr(socket, family))
try:
# SO_REUSEADDR has different semantics on Windows (issue #2550).
if os.name == 'posix':
self._socket.setsockopt(socket.SOL_SOCKET,
socket.SO_REUSEADDR, 1)
self._socket.setblocking(True)
self._socket.bind(address)
self._socket.listen(backlog)
self._address = self._socket.getsockname()
except OSError:
self._socket.close()
raise
self._family = family
self._last_accepted = None
if family == 'AF_UNIX':
self._unlink = Finalize(
self, os.unlink, args=(address, ), exitpriority=0
)
else:
self._unlink = None
def accept(self):
while True:
try:
s, self._last_accepted = self._socket.accept()
except OSError as exc:
if exc.errno == errno.EINTR:
continue
raise
else:
break
s.setblocking(True)
return Connection(s.detach())
def close(self):
self._socket.close()
if self._unlink is not None:
self._unlink()
def SocketClient(address):
'''
Return a connection object connected to the socket given by `address`
'''
family = address_type(address)
with socket.socket(getattr(socket, family)) as s:
s.setblocking(True)
s.connect(address)
return Connection(s.detach())
#
# Definitions for connections based on named pipes
#
if sys.platform == 'win32':
class PipeListener(object):
'''
Representation of a named pipe
'''
def __init__(self, address, backlog=None):
self._address = address
self._handle_queue = [self._new_handle(first=True)]
self._last_accepted = None
sub_debug('listener created with address=%r', self._address)
self.close = Finalize(
self, PipeListener._finalize_pipe_listener,
args=(self._handle_queue, self._address), exitpriority=0
)
def _new_handle(self, first=False):
flags = _winapi.PIPE_ACCESS_DUPLEX | _winapi.FILE_FLAG_OVERLAPPED
if first:
flags |= _winapi.FILE_FLAG_FIRST_PIPE_INSTANCE
return _winapi.CreateNamedPipe(
self._address, flags,
_winapi.PIPE_TYPE_MESSAGE | _winapi.PIPE_READMODE_MESSAGE |
_winapi.PIPE_WAIT,
_winapi.PIPE_UNLIMITED_INSTANCES, BUFSIZE, BUFSIZE,
_winapi.NMPWAIT_WAIT_FOREVER, _winapi.NULL
)
def accept(self):
self._handle_queue.append(self._new_handle())
handle = self._handle_queue.pop(0)
try:
ov = _winapi.ConnectNamedPipe(handle, overlapped=True)
except OSError as e:
if e.winerror != _winapi.ERROR_NO_DATA:
raise
# ERROR_NO_DATA can occur if a client has already connected,
# written data and then disconnected -- see Issue 14725.
else:
try:
_winapi.WaitForMultipleObjects([ov.event], False, INFINITE)
except:
ov.cancel()
_winapi.CloseHandle(handle)
raise
finally:
_, err = ov.GetOverlappedResult(True)
assert err == 0
return PipeConnection(handle)
@staticmethod
def _finalize_pipe_listener(queue, address):
sub_debug('closing listener with address=%r', address)
for handle in queue:
_winapi.CloseHandle(handle)
def PipeClient(address,
errors=(_winapi.ERROR_SEM_TIMEOUT,
_winapi.ERROR_PIPE_BUSY)):
'''
Return a connection object connected to the pipe given by `address`
'''
t = _init_timeout()
while 1:
try:
_winapi.WaitNamedPipe(address, 1000)
h = _winapi.CreateFile(
address, _winapi.GENERIC_READ | _winapi.GENERIC_WRITE,
0, _winapi.NULL, _winapi.OPEN_EXISTING,
_winapi.FILE_FLAG_OVERLAPPED, _winapi.NULL
)
except OSError as e:
if e.winerror not in errors or _check_timeout(t):
raise
else:
break
else:
raise
_winapi.SetNamedPipeHandleState(
h, _winapi.PIPE_READMODE_MESSAGE, None, None
)
return PipeConnection(h)
#
# Authentication stuff
#
MESSAGE_LENGTH = 20
CHALLENGE = b'#CHALLENGE#'
WELCOME = b'#WELCOME#'
FAILURE = b'#FAILURE#'
def deliver_challenge(connection, authkey):
import hmac
assert isinstance(authkey, bytes)
message = os.urandom(MESSAGE_LENGTH)
connection.send_bytes(CHALLENGE + message)
digest = hmac.new(authkey, message).digest()
response = connection.recv_bytes(256) # reject large message
if response == digest:
connection.send_bytes(WELCOME)
else:
connection.send_bytes(FAILURE)
raise AuthenticationError('digest received was wrong')
def answer_challenge(connection, authkey):
import hmac
assert isinstance(authkey, bytes)
message = connection.recv_bytes(256) # reject large message
assert message[:len(CHALLENGE)] == CHALLENGE, 'message = %r' % message
message = message[len(CHALLENGE):]
digest = hmac.new(authkey, message).digest()
connection.send_bytes(digest)
response = connection.recv_bytes(256) # reject large message
if response != WELCOME:
raise AuthenticationError('digest sent was rejected')
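# Illustrative addition (not part of the original file): the challenge
# protocol above run over a duplex pipe, with the deliverer in a thread.
def _example_challenge():
    import threading
    c1, c2 = Pipe()
    t = threading.Thread(target=deliver_challenge, args=(c1, b'key'))
    t.start()
    answer_challenge(c2, b'key')   # raises AuthenticationError on mismatch
    t.join()
    c1.close()
    c2.close()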
#
# Support for using xmlrpclib for serialization
#
class ConnectionWrapper(object):
def __init__(self, conn, dumps, loads):
self._conn = conn
self._dumps = dumps
self._loads = loads
for attr in ('fileno', 'close', 'poll', 'recv_bytes', 'send_bytes'):
obj = getattr(conn, attr)
setattr(self, attr, obj)
def send(self, obj):
s = self._dumps(obj)
self._conn.send_bytes(s)
def recv(self):
s = self._conn.recv_bytes()
return self._loads(s)
def _xml_dumps(obj):
return xmlrpclib.dumps((obj,), None, None, None, 1).encode('utf-8') # noqa
def _xml_loads(s):
(obj,), method = xmlrpclib.loads(s.decode('utf-8')) # noqa
return obj
class XmlListener(Listener):
def accept(self):
global xmlrpclib
import xmlrpc.client as xmlrpclib # noqa
obj = Listener.accept(self)
return ConnectionWrapper(obj, _xml_dumps, _xml_loads)
def XmlClient(*args, **kwds):
global xmlrpclib
import xmlrpc.client as xmlrpclib # noqa
return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads)
#
# Wait
#
if sys.platform == 'win32':
def _exhaustive_wait(handles, timeout):
# Return ALL handles which are currently signalled. (Only
# returning the first signalled might create starvation issues.)
L = list(handles)
ready = []
while L:
res = _winapi.WaitForMultipleObjects(L, False, timeout)
if res == WAIT_TIMEOUT:
break
elif WAIT_OBJECT_0 <= res < WAIT_OBJECT_0 + len(L):
res -= WAIT_OBJECT_0
elif WAIT_ABANDONED_0 <= res < WAIT_ABANDONED_0 + len(L):
res -= WAIT_ABANDONED_0
else:
raise RuntimeError('Should not get here')
ready.append(L[res])
L = L[res+1:]
timeout = 0
return ready
_ready_errors = {_winapi.ERROR_BROKEN_PIPE, _winapi.ERROR_NETNAME_DELETED}
def wait(object_list, timeout=None):
'''
Wait till an object in object_list is ready/readable.
Returns list of those objects in object_list which are ready/readable.
'''
if timeout is None:
timeout = INFINITE
elif timeout < 0:
timeout = 0
else:
timeout = int(timeout * 1000 + 0.5)
object_list = list(object_list)
waithandle_to_obj = {}
ov_list = []
ready_objects = set()
ready_handles = set()
try:
for o in object_list:
try:
fileno = getattr(o, 'fileno')
except AttributeError:
waithandle_to_obj[o.__index__()] = o
else:
# start an overlapped read of length zero
try:
ov, err = _winapi.ReadFile(fileno(), 0, True)
except OSError as e:
err = e.winerror
if err not in _ready_errors:
raise
if err == _winapi.ERROR_IO_PENDING:
ov_list.append(ov)
waithandle_to_obj[ov.event] = o
else:
# If o.fileno() is an overlapped pipe handle and
# err == 0 then there is a zero length message
# in the pipe, but it HAS NOT been consumed.
ready_objects.add(o)
timeout = 0
ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout)
finally:
# request that overlapped reads stop
for ov in ov_list:
ov.cancel()
# wait for all overlapped reads to stop
for ov in ov_list:
try:
_, err = ov.GetOverlappedResult(True)
except OSError as e:
err = e.winerror
if err not in _ready_errors:
raise
if err != _winapi.ERROR_OPERATION_ABORTED:
o = waithandle_to_obj[ov.event]
ready_objects.add(o)
if err == 0:
# If o.fileno() is an overlapped pipe handle then
# a zero length message HAS been consumed.
if hasattr(o, '_got_empty_message'):
o._got_empty_message = True
ready_objects.update(waithandle_to_obj[h] for h in ready_handles)
return [oj for oj in object_list if oj in ready_objects]
else:
if hasattr(select, 'poll'):
def _poll(fds, timeout):
if timeout is not None:
timeout = int(timeout * 1000) # timeout is in milliseconds
fd_map = {}
pollster = select.poll()
for fd in fds:
pollster.register(fd, select.POLLIN)
if hasattr(fd, 'fileno'):
fd_map[fd.fileno()] = fd
else:
fd_map[fd] = fd
ls = []
for fd, event in pollster.poll(timeout):
if event & select.POLLNVAL:
raise ValueError('invalid file descriptor %i' % fd)
ls.append(fd_map[fd])
return ls
else:
def _poll(fds, timeout): # noqa
return select.select(fds, [], [], timeout)[0]
def wait(object_list, timeout=None): # noqa
'''
Wait till an object in object_list is ready/readable.
Returns list of those objects in object_list which are ready/readable.
'''
if timeout is not None:
if timeout <= 0:
return _poll(object_list, 0)
else:
deadline = monotonic() + timeout
while True:
try:
return _poll(object_list, timeout)
except OSError as e:
if e.errno != errno.EINTR:
raise
if timeout is not None:
timeout = deadline - monotonic()
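# Illustrative addition (not part of the original file): polling a
# read-only connection with wait(); only the ready subset is returned.
def _example_wait():
    r, w = Pipe(duplex=False)
    assert wait([r], timeout=0.1) == []   # nothing to read yet
    w.send('ready')
    assert wait([r], timeout=1.0) == [r]
    assert r.recv() == 'ready'
    r.close()
    w.close()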

249
billiard/py3/reduction.py Normal file

@ -0,0 +1,249 @@
#
# Module which deals with pickling of objects.
#
# multiprocessing/reduction.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
import copyreg
import functools
import io
import os
import pickle
import socket
import sys
__all__ = ['send_handle', 'recv_handle', 'ForkingPickler', 'register', 'dump']
HAVE_SEND_HANDLE = (sys.platform == 'win32' or
(hasattr(socket, 'CMSG_LEN') and
hasattr(socket, 'SCM_RIGHTS') and
hasattr(socket.socket, 'sendmsg')))
#
# Pickler subclass
#
class ForkingPickler(pickle.Pickler):
'''Pickler subclass used by multiprocessing.'''
_extra_reducers = {}
_copyreg_dispatch_table = copyreg.dispatch_table
def __init__(self, *args):
super().__init__(*args)
self.dispatch_table = self._copyreg_dispatch_table.copy()
self.dispatch_table.update(self._extra_reducers)
@classmethod
def register(cls, type, reduce):
'''Register a reduce function for a type.'''
cls._extra_reducers[type] = reduce
@classmethod
def dumps(cls, obj, protocol=None):
buf = io.BytesIO()
cls(buf, protocol).dump(obj)
return buf.getbuffer()
loads = pickle.loads
register = ForkingPickler.register
def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
ForkingPickler(file, protocol).dump(obj)
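# Illustrative addition (not part of the original file): ForkingPickler
# round trip; dumps() returns a memoryview-backed buffer and loads() is
# plain pickle.loads.
def _example_pickler_roundtrip():
    data = {'nums': [1, 2, 3]}
    buf = ForkingPickler.dumps(data)
    assert ForkingPickler.loads(buf) == data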
#
# Platform specific definitions
#
if sys.platform == 'win32':
# Windows
__all__ += ['DupHandle', 'duplicate', 'steal_handle']
import _winapi
def duplicate(handle, target_process=None, inheritable=False):
'''Duplicate a handle. (target_process is a handle not a pid!)'''
if target_process is None:
target_process = _winapi.GetCurrentProcess()
return _winapi.DuplicateHandle(
_winapi.GetCurrentProcess(), handle, target_process,
0, inheritable, _winapi.DUPLICATE_SAME_ACCESS)
def steal_handle(source_pid, handle):
'''Steal a handle from process identified by source_pid.'''
source_process_handle = _winapi.OpenProcess(
_winapi.PROCESS_DUP_HANDLE, False, source_pid)
try:
return _winapi.DuplicateHandle(
source_process_handle, handle,
_winapi.GetCurrentProcess(), 0, False,
_winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
finally:
_winapi.CloseHandle(source_process_handle)
def send_handle(conn, handle, destination_pid):
'''Send a handle over a local connection.'''
dh = DupHandle(handle, _winapi.DUPLICATE_SAME_ACCESS, destination_pid)
conn.send(dh)
def recv_handle(conn):
'''Receive a handle over a local connection.'''
return conn.recv().detach()
class DupHandle(object):
'''Picklable wrapper for a handle.'''
def __init__(self, handle, access, pid=None):
if pid is None:
# We just duplicate the handle in the current process and
# let the receiving process steal the handle.
pid = os.getpid()
proc = _winapi.OpenProcess(_winapi.PROCESS_DUP_HANDLE, False, pid)
try:
self._handle = _winapi.DuplicateHandle(
_winapi.GetCurrentProcess(),
handle, proc, access, False, 0)
finally:
_winapi.CloseHandle(proc)
self._access = access
self._pid = pid
def detach(self):
'''Get the handle. This should only be called once.'''
# retrieve handle from process which currently owns it
if self._pid == os.getpid():
# The handle has already been duplicated for this process.
return self._handle
# We must steal the handle from the process whose pid is self._pid.
proc = _winapi.OpenProcess(_winapi.PROCESS_DUP_HANDLE, False,
self._pid)
try:
return _winapi.DuplicateHandle(
proc, self._handle, _winapi.GetCurrentProcess(),
self._access, False, _winapi.DUPLICATE_CLOSE_SOURCE)
finally:
_winapi.CloseHandle(proc)
else:
# Unix
__all__ += ['DupFd', 'sendfds', 'recvfds']
import array
    # On Mac OS X we should acknowledge receipt of fds -- see Issue 14669
ACKNOWLEDGE = sys.platform == 'darwin'
def sendfds(sock, fds):
'''Send an array of fds over an AF_UNIX socket.'''
fds = array.array('i', fds)
msg = bytes([len(fds) % 256])
sock.sendmsg([msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])
if ACKNOWLEDGE and sock.recv(1) != b'A':
raise RuntimeError('did not receive acknowledgement of fd')
def recvfds(sock, size):
'''Receive an array of fds over an AF_UNIX socket.'''
a = array.array('i')
bytes_size = a.itemsize * size
msg, ancdata, flags, addr = sock.recvmsg(
1, socket.CMSG_LEN(bytes_size),
)
if not msg and not ancdata:
raise EOFError
try:
if ACKNOWLEDGE:
sock.send(b'A')
if len(ancdata) != 1:
raise RuntimeError(
'received %d items of ancdata' % len(ancdata),
)
cmsg_level, cmsg_type, cmsg_data = ancdata[0]
if (cmsg_level == socket.SOL_SOCKET and
cmsg_type == socket.SCM_RIGHTS):
if len(cmsg_data) % a.itemsize != 0:
raise ValueError
a.frombytes(cmsg_data)
assert len(a) % 256 == msg[0]
return list(a)
except (ValueError, IndexError):
pass
raise RuntimeError('Invalid data received')
def send_handle(conn, handle, destination_pid): # noqa
'''Send a handle over a local connection.'''
fd = conn.fileno()
with socket.fromfd(fd, socket.AF_UNIX, socket.SOCK_STREAM) as s:
sendfds(s, [handle])
def recv_handle(conn): # noqa
'''Receive a handle over a local connection.'''
fd = conn.fileno()
with socket.fromfd(fd, socket.AF_UNIX, socket.SOCK_STREAM) as s:
return recvfds(s, 1)[0]
def DupFd(fd):
'''Return a wrapper for an fd.'''
from ..forking import Popen
return Popen.duplicate_for_child(fd)
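# Illustrative addition (not part of the original file): passing a file
# descriptor over an AF_UNIX socketpair with the helpers above
# (Unix-only; on Mac OS X the receipt is acknowledged in-band).
def _example_fd_passing():
    import os
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    r, w = os.pipe()
    sendfds(a, [r])
    r2 = recvfds(b, 1)[0]     # a duplicate of r in this process
    os.write(w, b'x')
    assert os.read(r2, 1) == b'x'
    for fd in (r, w, r2):
        os.close(fd)
    a.close()
    b.close()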
#
# Try making some callable types picklable
#
def _reduce_method(m):
if m.__self__ is None:
return getattr, (m.__class__, m.__func__.__name__)
else:
return getattr, (m.__self__, m.__func__.__name__)
class _C:
def f(self):
pass
register(type(_C().f), _reduce_method)
def _reduce_method_descriptor(m):
return getattr, (m.__objclass__, m.__name__)
register(type(list.append), _reduce_method_descriptor)
register(type(int.__add__), _reduce_method_descriptor)
def _reduce_partial(p):
return _rebuild_partial, (p.func, p.args, p.keywords or {})
def _rebuild_partial(func, args, keywords):
return functools.partial(func, *args, **keywords)
register(functools.partial, _reduce_partial)
#
# Make sockets picklable
#
if sys.platform == 'win32':
def _reduce_socket(s):
from ..resource_sharer import DupSocket
return _rebuild_socket, (DupSocket(s),)
def _rebuild_socket(ds):
return ds.detach()
register(socket.socket, _reduce_socket)
else:
def _reduce_socket(s): # noqa
df = DupFd(s.fileno())
return _rebuild_socket, (df, s.family, s.type, s.proto)
def _rebuild_socket(df, family, type, proto): # noqa
fd = df.detach()
return socket.socket(family, type, proto, fileno=fd)
register(socket.socket, _reduce_socket)

372
billiard/queues.py Normal file

@ -0,0 +1,372 @@
#
# Module implementing queues
#
# multiprocessing/queues.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
__all__ = ['Queue', 'SimpleQueue', 'JoinableQueue']
import sys
import os
import threading
import collections
import weakref
import errno
from . import Pipe
from ._ext import _billiard
from .compat import get_errno
from .five import monotonic
from .synchronize import Lock, BoundedSemaphore, Semaphore, Condition
from .util import debug, error, info, Finalize, register_after_fork
from .five import Empty, Full
from .forking import assert_spawning
class Queue(object):
'''
Queue type using a pipe, buffer and thread
'''
def __init__(self, maxsize=0):
if maxsize <= 0:
maxsize = _billiard.SemLock.SEM_VALUE_MAX
self._maxsize = maxsize
self._reader, self._writer = Pipe(duplex=False)
self._rlock = Lock()
self._opid = os.getpid()
if sys.platform == 'win32':
self._wlock = None
else:
self._wlock = Lock()
self._sem = BoundedSemaphore(maxsize)
# For use by concurrent.futures
self._ignore_epipe = False
self._after_fork()
if sys.platform != 'win32':
register_after_fork(self, Queue._after_fork)
def __getstate__(self):
assert_spawning(self)
return (self._ignore_epipe, self._maxsize, self._reader, self._writer,
self._rlock, self._wlock, self._sem, self._opid)
def __setstate__(self, state):
(self._ignore_epipe, self._maxsize, self._reader, self._writer,
self._rlock, self._wlock, self._sem, self._opid) = state
self._after_fork()
def _after_fork(self):
debug('Queue._after_fork()')
self._notempty = threading.Condition(threading.Lock())
self._buffer = collections.deque()
self._thread = None
self._jointhread = None
self._joincancelled = False
self._closed = False
self._close = None
self._send = self._writer.send
self._recv = self._reader.recv
self._poll = self._reader.poll
def put(self, obj, block=True, timeout=None):
assert not self._closed
if not self._sem.acquire(block, timeout):
raise Full
with self._notempty:
if self._thread is None:
self._start_thread()
self._buffer.append(obj)
self._notempty.notify()
def get(self, block=True, timeout=None):
if block and timeout is None:
with self._rlock:
res = self._recv()
self._sem.release()
return res
else:
if block:
deadline = monotonic() + timeout
if not self._rlock.acquire(block, timeout):
raise Empty
try:
if block:
timeout = deadline - monotonic()
if timeout < 0 or not self._poll(timeout):
raise Empty
elif not self._poll():
raise Empty
res = self._recv()
self._sem.release()
return res
finally:
self._rlock.release()
def qsize(self):
# Raises NotImplementedError on Mac OSX because
# of broken sem_getvalue()
return self._maxsize - self._sem._semlock._get_value()
def empty(self):
return not self._poll()
def full(self):
return self._sem._semlock._is_zero()
def get_nowait(self):
return self.get(False)
def put_nowait(self, obj):
return self.put(obj, False)
def close(self):
self._closed = True
self._reader.close()
if self._close:
self._close()
def join_thread(self):
debug('Queue.join_thread()')
assert self._closed
if self._jointhread:
self._jointhread()
def cancel_join_thread(self):
debug('Queue.cancel_join_thread()')
self._joincancelled = True
try:
self._jointhread.cancel()
except AttributeError:
pass
def _start_thread(self):
debug('Queue._start_thread()')
# Start thread which transfers data from buffer to pipe
self._buffer.clear()
self._thread = threading.Thread(
target=Queue._feed,
args=(self._buffer, self._notempty, self._send,
self._wlock, self._writer.close, self._ignore_epipe),
name='QueueFeederThread'
)
self._thread.daemon = True
debug('doing self._thread.start()')
self._thread.start()
debug('... done self._thread.start()')
# On process exit we will wait for data to be flushed to pipe.
#
# However, if this process created the queue then all
# processes which use the queue will be descendants of this
# process. Therefore waiting for the queue to be flushed
# is pointless once all the child processes have been joined.
created_by_this_process = (self._opid == os.getpid())
if not self._joincancelled and not created_by_this_process:
self._jointhread = Finalize(
self._thread, Queue._finalize_join,
[weakref.ref(self._thread)],
exitpriority=-5
)
# Send sentinel to the thread queue object when garbage collected
self._close = Finalize(
self, Queue._finalize_close,
[self._buffer, self._notempty],
exitpriority=10
)
@staticmethod
def _finalize_join(twr):
debug('joining queue thread')
thread = twr()
if thread is not None:
thread.join()
debug('... queue thread joined')
else:
debug('... queue thread already dead')
@staticmethod
def _finalize_close(buffer, notempty):
debug('telling queue thread to quit')
with notempty:
buffer.append(_sentinel)
notempty.notify()
@staticmethod
def _feed(buffer, notempty, send, writelock, close, ignore_epipe):
debug('starting thread to feed data to pipe')
from .util import is_exiting
ncond = notempty
nwait = notempty.wait
bpopleft = buffer.popleft
sentinel = _sentinel
if sys.platform != 'win32':
wlock = writelock
else:
wlock = None
try:
while 1:
with ncond:
if not buffer:
nwait()
try:
while 1:
obj = bpopleft()
if obj is sentinel:
debug('feeder thread got sentinel -- exiting')
close()
return
if wlock is None:
send(obj)
else:
with wlock:
send(obj)
except IndexError:
pass
except Exception as exc:
if ignore_epipe and get_errno(exc) == errno.EPIPE:
return
# Since this runs in a daemon thread the resources it uses
            # may become unusable while the process is cleaning up.
# We ignore errors which happen after the process has
# started to cleanup.
try:
if is_exiting():
info('error in queue thread: %r', exc, exc_info=True)
else:
if not error('error in queue thread: %r', exc,
exc_info=True):
import traceback
traceback.print_exc()
except Exception:
pass
_sentinel = object()
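# Illustrative addition (not part of the original file): Queue works
# within a single process too -- the feeder thread moves buffered items
# into the pipe, where get() picks them up.
def _example_queue():
    q = Queue()
    q.put('hello')
    assert q.get(timeout=5) == 'hello'
    q.close()
    q.join_thread()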
class JoinableQueue(Queue):
'''
A queue type which also supports join() and task_done() methods
Note that if you do not call task_done() for each finished task then
eventually the counter's semaphore may overflow causing Bad Things
to happen.
'''
def __init__(self, maxsize=0):
Queue.__init__(self, maxsize)
self._unfinished_tasks = Semaphore(0)
self._cond = Condition()
def __getstate__(self):
return Queue.__getstate__(self) + (self._cond, self._unfinished_tasks)
def __setstate__(self, state):
Queue.__setstate__(self, state[:-2])
self._cond, self._unfinished_tasks = state[-2:]
def put(self, obj, block=True, timeout=None):
assert not self._closed
if not self._sem.acquire(block, timeout):
raise Full
with self._notempty:
with self._cond:
if self._thread is None:
self._start_thread()
self._buffer.append(obj)
self._unfinished_tasks.release()
self._notempty.notify()
def task_done(self):
with self._cond:
if not self._unfinished_tasks.acquire(False):
raise ValueError('task_done() called too many times')
if self._unfinished_tasks._semlock._is_zero():
self._cond.notify_all()
def join(self):
with self._cond:
if not self._unfinished_tasks._semlock._is_zero():
self._cond.wait()
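# Illustrative addition (not part of the original file): join() blocks
# until every put() has been matched by a task_done().
def _example_joinable_queue():
    q = JoinableQueue()
    q.put('job')
    assert q.get(timeout=5) == 'job'
    q.task_done()
    q.join()   # returns immediately; no unfinished tasks remain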
class _SimpleQueue(object):
'''
Simplified Queue type -- really just a locked pipe
'''
def __init__(self, rnonblock=False, wnonblock=False):
self._reader, self._writer = Pipe(
duplex=False, rnonblock=rnonblock, wnonblock=wnonblock,
)
self._poll = self._reader.poll
self._rlock = self._wlock = None
self._make_methods()
def empty(self):
return not self._poll()
def __getstate__(self):
assert_spawning(self)
return (self._reader, self._writer, self._rlock, self._wlock)
def __setstate__(self, state):
(self._reader, self._writer, self._rlock, self._wlock) = state
self._make_methods()
def _make_methods(self):
recv = self._reader.recv
try:
recv_payload = self._reader.recv_payload
except AttributeError:
recv_payload = self._reader.recv_bytes
rlock = self._rlock
if rlock is not None:
def get():
with rlock:
return recv()
self.get = get
def get_payload():
with rlock:
return recv_payload()
self.get_payload = get_payload
else:
self.get = recv
self.get_payload = recv_payload
if self._wlock is None:
# writes to a message oriented win32 pipe are atomic
self.put = self._writer.send
else:
send = self._writer.send
wlock = self._wlock
def put(obj):
with wlock:
return send(obj)
self.put = put
class SimpleQueue(_SimpleQueue):
def __init__(self):
self._reader, self._writer = Pipe(duplex=False)
self._rlock = Lock()
self._wlock = Lock() if sys.platform != 'win32' else None
self._make_methods()

10
billiard/reduction.py Normal file

@ -0,0 +1,10 @@
from __future__ import absolute_import
import sys
if sys.version_info[0] == 3:
from .py3 import reduction
else:
from .py2 import reduction # noqa
sys.modules[__name__] = reduction
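# Illustrative note (not part of the original file): after the
# sys.modules swap above, importers transparently receive the
# version-specific implementation, e.g.:
#
#     from billiard import reduction
#     reduction.ForkingPickler   # py2 or py3 implementation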

248
billiard/sharedctypes.py Normal file

@ -0,0 +1,248 @@
#
# Module which supports allocation of ctypes objects from shared memory
#
# multiprocessing/sharedctypes.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
import ctypes
import weakref
from . import heap, RLock
from .five import int_types
from .forking import assert_spawning
from .reduction import ForkingPickler
__all__ = ['RawValue', 'RawArray', 'Value', 'Array', 'copy', 'synchronized']
typecode_to_type = {
'c': ctypes.c_char, 'u': ctypes.c_wchar,
'b': ctypes.c_byte, 'B': ctypes.c_ubyte,
'h': ctypes.c_short, 'H': ctypes.c_ushort,
'i': ctypes.c_int, 'I': ctypes.c_uint,
'l': ctypes.c_long, 'L': ctypes.c_ulong,
'f': ctypes.c_float, 'd': ctypes.c_double
}
def _new_value(type_):
size = ctypes.sizeof(type_)
wrapper = heap.BufferWrapper(size)
return rebuild_ctype(type_, wrapper, None)
def RawValue(typecode_or_type, *args):
'''
Returns a ctypes object allocated from shared memory
'''
type_ = typecode_to_type.get(typecode_or_type, typecode_or_type)
obj = _new_value(type_)
ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj))
obj.__init__(*args)
return obj
def RawArray(typecode_or_type, size_or_initializer):
'''
Returns a ctypes array allocated from shared memory
'''
type_ = typecode_to_type.get(typecode_or_type, typecode_or_type)
if isinstance(size_or_initializer, int_types):
type_ = type_ * size_or_initializer
obj = _new_value(type_)
ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj))
return obj
else:
type_ = type_ * len(size_or_initializer)
result = _new_value(type_)
result.__init__(*size_or_initializer)
return result
def Value(typecode_or_type, *args, **kwds):
'''
Return a synchronization wrapper for a Value
'''
lock = kwds.pop('lock', None)
if kwds:
raise ValueError(
'unrecognized keyword argument(s): %s' % list(kwds.keys()))
obj = RawValue(typecode_or_type, *args)
if lock is False:
return obj
if lock in (True, None):
lock = RLock()
if not hasattr(lock, 'acquire'):
raise AttributeError("'%r' has no method 'acquire'" % lock)
return synchronized(obj, lock)
def Array(typecode_or_type, size_or_initializer, **kwds):
'''
Return a synchronization wrapper for a RawArray
'''
lock = kwds.pop('lock', None)
if kwds:
raise ValueError(
'unrecognized keyword argument(s): %s' % list(kwds.keys()))
obj = RawArray(typecode_or_type, size_or_initializer)
if lock is False:
return obj
if lock in (True, None):
lock = RLock()
if not hasattr(lock, 'acquire'):
raise AttributeError("'%r' has no method 'acquire'" % lock)
return synchronized(obj, lock)
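# Illustrative addition (not part of the original file): the
# synchronized wrappers in action; get_lock() exposes the RLock
# guarding the shared object.
def _example_shared_ctypes():
    v = Value('i', 0)
    with v.get_lock():
        v.value += 1
    arr = Array('d', [0.0, 1.0, 2.0])
    arr[1] = 5.0
    assert v.value == 1 and arr[1] == 5.0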
def copy(obj):
new_obj = _new_value(type(obj))
ctypes.pointer(new_obj)[0] = obj
return new_obj
def synchronized(obj, lock=None):
assert not isinstance(obj, SynchronizedBase), 'object already synchronized'
if isinstance(obj, ctypes._SimpleCData):
return Synchronized(obj, lock)
elif isinstance(obj, ctypes.Array):
if obj._type_ is ctypes.c_char:
return SynchronizedString(obj, lock)
return SynchronizedArray(obj, lock)
else:
cls = type(obj)
try:
scls = class_cache[cls]
except KeyError:
names = [field[0] for field in cls._fields_]
d = dict((name, make_property(name)) for name in names)
classname = 'Synchronized' + cls.__name__
scls = class_cache[cls] = type(classname, (SynchronizedBase,), d)
return scls(obj, lock)
#
# Functions for pickling/unpickling
#
def reduce_ctype(obj):
assert_spawning(obj)
if isinstance(obj, ctypes.Array):
return rebuild_ctype, (obj._type_, obj._wrapper, obj._length_)
else:
return rebuild_ctype, (type(obj), obj._wrapper, None)
def rebuild_ctype(type_, wrapper, length):
if length is not None:
type_ = type_ * length
ForkingPickler.register(type_, reduce_ctype)
obj = type_.from_address(wrapper.get_address())
obj._wrapper = wrapper
return obj
#
# Function to create properties
#
def make_property(name):
try:
return prop_cache[name]
except KeyError:
d = {}
exec(template % ((name, ) * 7), d)
prop_cache[name] = d[name]
return d[name]
template = '''
def get%s(self):
self.acquire()
try:
return self._obj.%s
finally:
self.release()
def set%s(self, value):
self.acquire()
try:
self._obj.%s = value
finally:
self.release()
%s = property(get%s, set%s)
'''
prop_cache = {}
class_cache = weakref.WeakKeyDictionary()
#
# Synchronized wrappers
#
class SynchronizedBase(object):
def __init__(self, obj, lock=None):
self._obj = obj
self._lock = lock or RLock()
self.acquire = self._lock.acquire
self.release = self._lock.release
def __reduce__(self):
assert_spawning(self)
return synchronized, (self._obj, self._lock)
def get_obj(self):
return self._obj
def get_lock(self):
return self._lock
def __repr__(self):
return '<%s wrapper for %s>' % (type(self).__name__, self._obj)
class Synchronized(SynchronizedBase):
value = make_property('value')
class SynchronizedArray(SynchronizedBase):
def __len__(self):
return len(self._obj)
def __getitem__(self, i):
self.acquire()
try:
return self._obj[i]
finally:
self.release()
def __setitem__(self, i, value):
self.acquire()
try:
self._obj[i] = value
finally:
self.release()
def __getslice__(self, start, stop):
self.acquire()
try:
return self._obj[start:stop]
finally:
self.release()
def __setslice__(self, start, stop, values):
self.acquire()
try:
self._obj[start:stop] = values
finally:
self.release()
class SynchronizedString(SynchronizedArray):
value = make_property('value')
raw = make_property('raw')

449
billiard/synchronize.py Normal file

@ -0,0 +1,449 @@
#
# Module implementing synchronization primitives
#
# multiprocessing/synchronize.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
__all__ = [
'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Condition', 'Event',
]
import itertools
import os
import signal
import sys
import threading
from ._ext import _billiard, ensure_SemLock
from .five import range, monotonic
from .process import current_process
from .util import Finalize, register_after_fork, debug
from .forking import assert_spawning, Popen
from .compat import bytes, closerange
# Try to import the mp.synchronize module cleanly; if it fails,
# raise ImportError for platforms lacking a working sem_open implementation.
# See issue 3770
ensure_SemLock()
#
# Constants
#
RECURSIVE_MUTEX, SEMAPHORE = list(range(2))
SEM_VALUE_MAX = _billiard.SemLock.SEM_VALUE_MAX
try:
sem_unlink = _billiard.SemLock.sem_unlink
except AttributeError: # pragma: no cover
try:
# Py3.4+ implements sem_unlink and the semaphore must be named
from _multiprocessing import sem_unlink # noqa
except ImportError:
sem_unlink = None # noqa
#
# Base class for semaphores and mutexes; wraps `_billiard.SemLock`
#
def _semname(sl):
try:
return sl.name
except AttributeError:
pass
class SemLock(object):
_counter = itertools.count()
def __init__(self, kind, value, maxvalue):
from .forking import _forking_is_enabled
unlink_immediately = _forking_is_enabled or sys.platform == 'win32'
if sem_unlink:
sl = self._semlock = _billiard.SemLock(
kind, value, maxvalue, self._make_name(), unlink_immediately)
else:
sl = self._semlock = _billiard.SemLock(kind, value, maxvalue)
debug('created semlock with handle %s', sl.handle)
self._make_methods()
if sem_unlink:
if sys.platform != 'win32':
def _after_fork(obj):
obj._semlock._after_fork()
register_after_fork(self, _after_fork)
if _semname(self._semlock) is not None:
# We only get here if we are on Unix with forking
# disabled. When the object is garbage collected or the
# process shuts down we unlink the semaphore name
Finalize(self, sem_unlink, (self._semlock.name,),
exitpriority=0)
# In case of abnormal termination unlink semaphore name
_cleanup_semaphore_if_leaked(self._semlock.name)
def _make_methods(self):
self.acquire = self._semlock.acquire
self.release = self._semlock.release
def __enter__(self):
return self._semlock.__enter__()
def __exit__(self, *args):
return self._semlock.__exit__(*args)
def __getstate__(self):
assert_spawning(self)
sl = self._semlock
state = (Popen.duplicate_for_child(sl.handle), sl.kind, sl.maxvalue)
try:
state += (sl.name, )
except AttributeError:
pass
return state
def __setstate__(self, state):
self._semlock = _billiard.SemLock._rebuild(*state)
debug('recreated blocker with handle %r', state[0])
self._make_methods()
@staticmethod
def _make_name():
return '/%s-%s-%s' % (current_process()._semprefix,
os.getpid(), next(SemLock._counter))
class Semaphore(SemLock):
def __init__(self, value=1):
SemLock.__init__(self, SEMAPHORE, value, SEM_VALUE_MAX)
def get_value(self):
return self._semlock._get_value()
def __repr__(self):
try:
value = self._semlock._get_value()
except Exception:
value = 'unknown'
return '<Semaphore(value=%s)>' % value
class BoundedSemaphore(Semaphore):
def __init__(self, value=1):
SemLock.__init__(self, SEMAPHORE, value, value)
def __repr__(self):
try:
value = self._semlock._get_value()
except Exception:
value = 'unknown'
return '<BoundedSemaphore(value=%s, maxvalue=%s)>' % \
(value, self._semlock.maxvalue)
class Lock(SemLock):
'''
Non-recursive lock.
'''
def __init__(self):
SemLock.__init__(self, SEMAPHORE, 1, 1)
def __repr__(self):
try:
if self._semlock._is_mine():
name = current_process().name
if threading.currentThread().name != 'MainThread':
name += '|' + threading.currentThread().name
elif self._semlock._get_value() == 1:
name = 'None'
elif self._semlock._count() > 0:
name = 'SomeOtherThread'
else:
name = 'SomeOtherProcess'
except Exception:
name = 'unknown'
return '<Lock(owner=%s)>' % name
class RLock(SemLock):
'''
Recursive lock
'''
def __init__(self):
SemLock.__init__(self, RECURSIVE_MUTEX, 1, 1)
def __repr__(self):
try:
if self._semlock._is_mine():
name = current_process().name
if threading.currentThread().name != 'MainThread':
name += '|' + threading.currentThread().name
count = self._semlock._count()
elif self._semlock._get_value() == 1:
name, count = 'None', 0
elif self._semlock._count() > 0:
name, count = 'SomeOtherThread', 'nonzero'
else:
name, count = 'SomeOtherProcess', 'nonzero'
except Exception:
name, count = 'unknown', 'unknown'
return '<RLock(%s, %s)>' % (name, count)
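# Illustrative addition (not part of the original file): the two lock
# flavours; only RLock may be re-acquired by its current owner.
def _example_locks():
    lock = Lock()
    with lock:
        pass                      # non-recursive critical section
    rlock = RLock()
    with rlock:
        with rlock:               # recursive re-acquisition is fine
            pass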
class Condition(object):
'''
Condition variable
'''
def __init__(self, lock=None):
self._lock = lock or RLock()
self._sleeping_count = Semaphore(0)
self._woken_count = Semaphore(0)
self._wait_semaphore = Semaphore(0)
self._make_methods()
def __getstate__(self):
assert_spawning(self)
return (self._lock, self._sleeping_count,
self._woken_count, self._wait_semaphore)
def __setstate__(self, state):
(self._lock, self._sleeping_count,
self._woken_count, self._wait_semaphore) = state
self._make_methods()
def __enter__(self):
return self._lock.__enter__()
def __exit__(self, *args):
return self._lock.__exit__(*args)
def _make_methods(self):
self.acquire = self._lock.acquire
self.release = self._lock.release
def __repr__(self):
try:
num_waiters = (self._sleeping_count._semlock._get_value() -
self._woken_count._semlock._get_value())
except Exception:
            num_waiters = 'unknown'
return '<Condition(%s, %s)>' % (self._lock, num_waiters)
def wait(self, timeout=None):
assert self._lock._semlock._is_mine(), \
'must acquire() condition before using wait()'
# indicate that this thread is going to sleep
self._sleeping_count.release()
# release lock
count = self._lock._semlock._count()
for i in range(count):
self._lock.release()
try:
# wait for notification or timeout
ret = self._wait_semaphore.acquire(True, timeout)
finally:
# indicate that this thread has woken
self._woken_count.release()
# reacquire lock
for i in range(count):
self._lock.acquire()
return ret
def notify(self):
assert self._lock._semlock._is_mine(), 'lock is not owned'
assert not self._wait_semaphore.acquire(False)
# to take account of timeouts since last notify() we subtract
# woken_count from sleeping_count and rezero woken_count
while self._woken_count.acquire(False):
res = self._sleeping_count.acquire(False)
assert res
if self._sleeping_count.acquire(False): # try grabbing a sleeper
self._wait_semaphore.release() # wake up one sleeper
self._woken_count.acquire() # wait for sleeper to wake
# rezero _wait_semaphore in case a timeout just happened
self._wait_semaphore.acquire(False)
def notify_all(self):
assert self._lock._semlock._is_mine(), 'lock is not owned'
assert not self._wait_semaphore.acquire(False)
# to take account of timeouts since last notify*() we subtract
# woken_count from sleeping_count and rezero woken_count
while self._woken_count.acquire(False):
res = self._sleeping_count.acquire(False)
assert res
sleepers = 0
while self._sleeping_count.acquire(False):
self._wait_semaphore.release() # wake up one sleeper
sleepers += 1
if sleepers:
for i in range(sleepers):
self._woken_count.acquire() # wait for a sleeper to wake
# rezero wait_semaphore in case some timeouts just happened
while self._wait_semaphore.acquire(False):
pass
def wait_for(self, predicate, timeout=None):
result = predicate()
if result:
return result
if timeout is not None:
endtime = monotonic() + timeout
else:
endtime = None
waittime = None
while not result:
if endtime is not None:
waittime = endtime - monotonic()
if waittime <= 0:
break
self.wait(waittime)
result = predicate()
return result
class Event(object):
def __init__(self):
self._cond = Condition(Lock())
self._flag = Semaphore(0)
def is_set(self):
self._cond.acquire()
try:
if self._flag.acquire(False):
self._flag.release()
return True
return False
finally:
self._cond.release()
def set(self):
self._cond.acquire()
try:
self._flag.acquire(False)
self._flag.release()
self._cond.notify_all()
finally:
self._cond.release()
def clear(self):
self._cond.acquire()
try:
self._flag.acquire(False)
finally:
self._cond.release()
def wait(self, timeout=None):
self._cond.acquire()
try:
if self._flag.acquire(False):
self._flag.release()
else:
self._cond.wait(timeout)
if self._flag.acquire(False):
self._flag.release()
return True
return False
finally:
self._cond.release()
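# Illustrative addition (not part of the original file): basic Event
# semantics; wait() returns True once the flag is set.
def _example_event():
    e = Event()
    assert e.is_set() is False
    e.set()
    assert e.wait(timeout=1) is True
    e.clear()
    assert e.wait(timeout=0.1) is False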
if sys.platform != 'win32':
#
# Protection against unlinked semaphores if the program ends abnormally
# and forking has been disabled.
#
def _cleanup_semaphore_if_leaked(name):
name = name.encode('ascii') + bytes('\0', 'ascii')
if len(name) > 512:
        # POSIX guarantees that writes to a pipe of less than PIPE_BUF
        # bytes are atomic, and that PIPE_BUF >= 512
raise ValueError('name too long')
fd = _get_unlinkfd()
bits = os.write(fd, name)
assert bits == len(name)
def _get_unlinkfd():
cp = current_process()
if cp._unlinkfd is None:
r, w = os.pipe()
pid = os.fork()
if pid == 0:
try:
from setproctitle import setproctitle
setproctitle("[sem_cleanup for %r]" % cp.pid)
except:
pass
# Fork a process which will survive until all other processes
# which have a copy of the write end of the pipe have exited.
# The forked process just collects names of semaphores until
# EOF is indicated. Then it tries unlinking all the names it
# has collected.
_collect_names_then_unlink(r)
os._exit(0)
os.close(r)
cp._unlinkfd = w
return cp._unlinkfd
def _collect_names_then_unlink(r):
# protect the process from ^C and "killall python" etc
signal.signal(signal.SIGINT, signal.SIG_IGN)
signal.signal(signal.SIGTERM, signal.SIG_IGN)
# close all fds except r
try:
MAXFD = os.sysconf("SC_OPEN_MAX")
except:
MAXFD = 256
closerange(0, r)
closerange(r + 1, MAXFD)
# collect data written to pipe
data = []
while 1:
try:
s = os.read(r, 512)
except:
# XXX IO lock might be held at fork, so don't try
# printing unexpected exception - see issue 6721
pass
else:
if not s:
break
data.append(s)
# attempt to unlink each collected name
for name in bytes('', 'ascii').join(data).split(bytes('\0', 'ascii')):
try:
sem_unlink(name.decode('ascii'))
except:
# XXX IO lock might be held at fork, so don't try
# printing unexpected exception - see issue 6721
pass


@ -0,0 +1,21 @@
from __future__ import absolute_import
import atexit
def teardown():
    # Workaround for a multiprocessing bug where logging
    # is attempted after globals have already been collected at shutdown.
cancelled = set()
try:
import multiprocessing.util
cancelled.add(multiprocessing.util._exit_function)
except (AttributeError, ImportError):
pass
try:
atexit._exithandlers[:] = [
e for e in atexit._exithandlers if e[0] not in cancelled
]
except AttributeError:
pass

85
billiard/tests/compat.py Normal file

@ -0,0 +1,85 @@
from __future__ import absolute_import
import sys
class WarningMessage(object):
"""Holds the result of a single showwarning() call."""
_WARNING_DETAILS = ('message', 'category', 'filename', 'lineno', 'file',
'line')
def __init__(self, message, category, filename, lineno, file=None,
line=None):
local_values = locals()
for attr in self._WARNING_DETAILS:
setattr(self, attr, local_values[attr])
self._category_name = category and category.__name__ or None
def __str__(self):
return ('{message : %r, category : %r, filename : %r, lineno : %s, '
'line : %r}' % (self.message, self._category_name,
self.filename, self.lineno, self.line))
class catch_warnings(object):
"""A context manager that copies and restores the warnings filter upon
exiting the context.
The 'record' argument specifies whether warnings should be captured by a
custom implementation of warnings.showwarning() and be appended to a list
returned by the context manager. Otherwise None is returned by the context
manager. The objects appended to the list are arguments whose attributes
mirror the arguments to showwarning().
The 'module' argument is to specify an alternative module to the module
named 'warnings' and imported under that name. This argument is only
useful when testing the warnings module itself.
"""
def __init__(self, record=False, module=None):
"""Specify whether to record warnings and if an alternative module
should be used other than sys.modules['warnings'].
For compatibility with Python 3.0, please consider all arguments to be
keyword-only.
"""
self._record = record
self._module = module is None and sys.modules['warnings'] or module
self._entered = False
def __repr__(self):
args = []
if self._record:
args.append('record=True')
if self._module is not sys.modules['warnings']:
args.append('module=%r' % self._module)
name = type(self).__name__
return '%s(%s)' % (name, ', '.join(args))
def __enter__(self):
if self._entered:
raise RuntimeError('Cannot enter %r twice' % self)
self._entered = True
self._filters = self._module.filters
self._module.filters = self._filters[:]
self._showwarning = self._module.showwarning
if self._record:
log = []
def showwarning(*args, **kwargs):
log.append(WarningMessage(*args, **kwargs))
self._module.showwarning = showwarning
return log
def __exit__(self, *exc_info):
if not self._entered:
raise RuntimeError('Cannot exit %r without entering first' % self)
self._module.filters = self._filters
self._module.showwarning = self._showwarning
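# Illustrative addition (not part of the original file): recording
# warnings raised inside the context manager defined above.
def _example_catch_warnings():
    import warnings
    with catch_warnings(record=True) as log:
        warnings.simplefilter('always')
        warnings.warn('careful!', UserWarning)
    assert len(log) == 1
    assert issubclass(log[0].category, UserWarning)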


@ -0,0 +1,98 @@
from __future__ import absolute_import
import os
import signal
from contextlib import contextmanager
from mock import call, patch, Mock
from time import time
from billiard.common import (
_shutdown_cleanup,
reset_signals,
restart_state,
)
from .utils import Case
def signo(name):
return getattr(signal, name)
@contextmanager
def termsigs(default, full):
from billiard import common
prev_def, common.TERMSIGS_DEFAULT = common.TERMSIGS_DEFAULT, default
prev_full, common.TERMSIGS_FULL = common.TERMSIGS_FULL, full
try:
yield
finally:
common.TERMSIGS_DEFAULT, common.TERMSIGS_FULL = prev_def, prev_full
class test_reset_signals(Case):
def test_shutdown_handler(self):
with patch('sys.exit') as exit:
_shutdown_cleanup(15, Mock())
self.assertTrue(exit.called)
self.assertEqual(os.WTERMSIG(exit.call_args[0][0]), 15)
def test_does_not_reset_ignored_signal(self, sigs=['SIGTERM']):
with self.assert_context(sigs, [], signal.SIG_IGN) as (_, SET):
self.assertFalse(SET.called)
def test_does_not_reset_if_current_is_None(self, sigs=['SIGTERM']):
with self.assert_context(sigs, [], None) as (_, SET):
self.assertFalse(SET.called)
def test_resets_for_SIG_DFL(self, sigs=['SIGTERM', 'SIGINT', 'SIGUSR1']):
with self.assert_context(sigs, [], signal.SIG_DFL) as (_, SET):
SET.assert_has_calls([
call(signo(sig), _shutdown_cleanup) for sig in sigs
])
def test_resets_for_obj(self, sigs=['SIGTERM', 'SIGINT', 'SIGUSR1']):
with self.assert_context(sigs, [], object()) as (_, SET):
SET.assert_has_calls([
call(signo(sig), _shutdown_cleanup) for sig in sigs
])
def test_handles_errors(self, sigs=['SIGTERM']):
for exc in (OSError(), AttributeError(),
ValueError(), RuntimeError()):
with self.assert_context(sigs, [], signal.SIG_DFL, exc) as (_, S):
self.assertTrue(S.called)
@contextmanager
def assert_context(self, default, full, get_returns=None, set_effect=None):
with termsigs(default, full):
with patch('signal.getsignal') as GET:
with patch('signal.signal') as SET:
GET.return_value = get_returns
SET.side_effect = set_effect
reset_signals()
GET.assert_has_calls([
call(signo(sig)) for sig in default
])
yield GET, SET
class test_restart_state(Case):
def test_raises(self):
s = restart_state(100, 1) # max 100 restarts in 1 second.
s.R = 99
s.step()
with self.assertRaises(s.RestartFreqExceeded):
s.step()
def test_time_passed_resets_counter(self):
s = restart_state(100, 10)
s.R, s.T = 100, time()
with self.assertRaises(s.RestartFreqExceeded):
s.step()
s.R, s.T = 100, time()
s.step(time() + 20)
self.assertEqual(s.R, 1)
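# Illustrative sketch of the semantics exercised above: restart_state(R, T)
# allows at most R restarts within a T-second window and raises
# RestartFreqExceeded on the first step() past that budget.
#
#     s = restart_state(3, 60)
#     for _ in range(3):
#         s.step()        # three restarts inside the window: fine
#     # s.step()          # a fourth step within 60s would raise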


@ -0,0 +1,12 @@
from __future__ import absolute_import
import billiard
from .utils import Case
class test_billiard(Case):
def test_has_version(self):
self.assertTrue(billiard.__version__)
self.assertIsInstance(billiard.__version__, str)

145
billiard/tests/utils.py Normal file

@ -0,0 +1,145 @@
from __future__ import absolute_import
import re
import sys
import warnings
try:
import unittest # noqa
unittest.skip
from unittest.util import safe_repr, unorderable_list_difference
except AttributeError:
import unittest2 as unittest # noqa
from unittest2.util import safe_repr, unorderable_list_difference # noqa
from billiard.five import string_t, items, values
from .compat import catch_warnings
# -- adds assertWarns from recent unittest2; not available in Python 2.7's unittest.
class _AssertRaisesBaseContext(object):
def __init__(self, expected, test_case, callable_obj=None,
expected_regex=None):
self.expected = expected
self.failureException = test_case.failureException
self.obj_name = None
if isinstance(expected_regex, string_t):
expected_regex = re.compile(expected_regex)
self.expected_regex = expected_regex
class _AssertWarnsContext(_AssertRaisesBaseContext):
"""A context manager used to implement TestCase.assertWarns* methods."""
def __enter__(self):
# Each module's __warningregistry__ needs to be in a pristine state
# for tests to work properly.
warnings.resetwarnings()
for v in values(sys.modules):
if getattr(v, '__warningregistry__', None):
v.__warningregistry__ = {}
self.warnings_manager = catch_warnings(record=True)
self.warnings = self.warnings_manager.__enter__()
warnings.simplefilter('always', self.expected)
return self
def __exit__(self, exc_type, exc_value, tb):
self.warnings_manager.__exit__(exc_type, exc_value, tb)
if exc_type is not None:
# let unexpected exceptions pass through
return
try:
exc_name = self.expected.__name__
except AttributeError:
exc_name = str(self.expected)
first_matching = None
for m in self.warnings:
w = m.message
if not isinstance(w, self.expected):
continue
if first_matching is None:
first_matching = w
if (self.expected_regex is not None and
not self.expected_regex.search(str(w))):
continue
# store warning for later retrieval
self.warning = w
self.filename = m.filename
self.lineno = m.lineno
return
# Now we simply try to choose a helpful failure message
if first_matching is not None:
raise self.failureException(
'%r does not match %r' % (
self.expected_regex.pattern, str(first_matching)))
if self.obj_name:
raise self.failureException(
'%s not triggered by %s' % (exc_name, self.obj_name))
else:
raise self.failureException('%s not triggered' % exc_name)
class Case(unittest.TestCase):
def assertWarns(self, expected_warning):
return _AssertWarnsContext(expected_warning, self, None)
def assertWarnsRegex(self, expected_warning, expected_regex):
return _AssertWarnsContext(expected_warning, self,
None, expected_regex)
def assertDictContainsSubset(self, expected, actual, msg=None):
missing, mismatched = [], []
for key, value in items(expected):
if key not in actual:
missing.append(key)
elif value != actual[key]:
mismatched.append('%s, expected: %s, actual: %s' % (
safe_repr(key), safe_repr(value),
safe_repr(actual[key])))
if not (missing or mismatched):
return
standard_msg = ''
if missing:
standard_msg = 'Missing: %s' % ','.join(map(safe_repr, missing))
if mismatched:
if standard_msg:
standard_msg += '; '
standard_msg += 'Mismatched values: %s' % (
','.join(mismatched))
self.fail(self._formatMessage(msg, standard_msg))
def assertItemsEqual(self, expected_seq, actual_seq, msg=None):
missing = unexpected = None
try:
expected = sorted(expected_seq)
actual = sorted(actual_seq)
except TypeError:
# Unsortable items (example: set(), complex(), ...)
expected = list(expected_seq)
actual = list(actual_seq)
missing, unexpected = unorderable_list_difference(
expected, actual)
else:
return self.assertSequenceEqual(expected, actual, msg=msg)
errors = []
if missing:
errors.append(
'Expected, but missing:\n %s' % (safe_repr(missing), ),
)
if unexpected:
errors.append(
'Unexpected, but present:\n %s' % (safe_repr(unexpected), ),
)
if errors:
standardMsg = '\n'.join(errors)
self.fail(self._formatMessage(msg, standardMsg))
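# Illustrative usage sketch for the backported assertions (hypothetical
# test; ``frobnicate`` stands in for any callable expected to warn):
#
#     class test_example(Case):
#         def test_warns(self):
#             with self.assertWarns(DeprecationWarning):
#                 frobnicate()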

152
billiard/util.py Normal file

@ -0,0 +1,152 @@
#
# Module providing various facilities to other parts of the package
#
# billiard/util.py
#
# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt
# Licensed to PSF under a Contributor Agreement.
#
from __future__ import absolute_import
import errno
import functools
import atexit
from multiprocessing.util import ( # noqa
_afterfork_registry,
_afterfork_counter,
_exit_function,
_finalizer_registry,
_finalizer_counter,
Finalize,
ForkAwareLocal,
ForkAwareThreadLock,
get_temp_dir,
is_exiting,
register_after_fork,
_run_after_forkers,
_run_finalizers,
)
from .compat import get_errno
__all__ = [
'sub_debug', 'debug', 'info', 'sub_warning', 'get_logger',
'log_to_stderr', 'get_temp_dir', 'register_after_fork',
'is_exiting', 'Finalize', 'ForkAwareThreadLock', 'ForkAwareLocal',
'SUBDEBUG', 'SUBWARNING',
]
#
# Logging
#
NOTSET = 0
SUBDEBUG = 5
DEBUG = 10
INFO = 20
SUBWARNING = 25
ERROR = 40
LOGGER_NAME = 'multiprocessing'
DEFAULT_LOGGING_FORMAT = '[%(levelname)s/%(processName)s] %(message)s'
_logger = None
_log_to_stderr = False
def sub_debug(msg, *args, **kwargs):
if _logger:
_logger.log(SUBDEBUG, msg, *args, **kwargs)
def debug(msg, *args, **kwargs):
if _logger:
_logger.log(DEBUG, msg, *args, **kwargs)
return True
return False
def info(msg, *args, **kwargs):
if _logger:
_logger.log(INFO, msg, *args, **kwargs)
return True
return False
def sub_warning(msg, *args, **kwargs):
if _logger:
_logger.log(SUBWARNING, msg, *args, **kwargs)
return True
return False
def error(msg, *args, **kwargs):
if _logger:
_logger.log(ERROR, msg, *args, **kwargs)
return True
return False
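# Note that debug(), info(), sub_warning() and error() return True only when
# a logger has been configured, so callers can cheaply detect whether a
# message was actually emitted. Illustrative sketch (``n`` is hypothetical):
#
#     if not debug('pool grew to %s', n):
#         pass  # no logger configured; the message was dropped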
def get_logger():
'''
Returns logger used by multiprocessing
'''
global _logger
import logging
logging._acquireLock()
try:
if not _logger:
_logger = logging.getLogger(LOGGER_NAME)
_logger.propagate = 0
logging.addLevelName(SUBDEBUG, 'SUBDEBUG')
logging.addLevelName(SUBWARNING, 'SUBWARNING')
# XXX multiprocessing should cleanup before logging
if hasattr(atexit, 'unregister'):
atexit.unregister(_exit_function)
atexit.register(_exit_function)
else:
atexit._exithandlers.remove((_exit_function, (), {}))
atexit._exithandlers.append((_exit_function, (), {}))
finally:
logging._releaseLock()
return _logger
def log_to_stderr(level=None):
'''
Turn on logging and add a handler which prints to stderr
'''
global _log_to_stderr
import logging
logger = get_logger()
formatter = logging.Formatter(DEFAULT_LOGGING_FORMAT)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
if level:
logger.setLevel(level)
_log_to_stderr = True
return logger
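# Minimal usage sketch (assumes stdlib ``logging`` semantics): passing the
# custom SUBDEBUG level (5) makes even sub-debug records visible on stderr.
#
#     logger = log_to_stderr(SUBDEBUG)
#     logger.log(SUBDEBUG, 'worker handshake complete')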
def _eintr_retry(func):
'''
Automatic retry after EINTR.
'''
@functools.wraps(func)
def wrapped(*args, **kwargs):
while 1:
try:
return func(*args, **kwargs)
except OSError as exc:
if get_errno(exc) != errno.EINTR:
raise
return wrapped
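# Illustrative sketch: wrapping a raw syscall so interrupted calls are
# retried transparently (``fd`` is a hypothetical open file descriptor).
#
#     import os
#     read = _eintr_retry(os.read)
#     data = read(fd, 4096)   # re-issued automatically on EINTR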

5
funtests/__init__.py Normal file

@ -0,0 +1,5 @@
import os
import sys
sys.path.insert(0, os.pardir)
sys.path.insert(0, os.getcwd())

59
funtests/setup.py Normal file

@ -0,0 +1,59 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
try:
from setuptools import setup
from setuptools.command.install import install
except ImportError:
from ez_setup import use_setuptools
use_setuptools()
from setuptools import setup # noqa
from setuptools.command.install import install # noqa
class no_install(install):
def run(self, *args, **kwargs):
import sys
sys.stderr.write("""
-------------------------------------------------------
The billiard functional test suite cannot be installed.
-------------------------------------------------------
But you can execute the tests by running the command:
$ python setup.py test
""")
setup(
name='billiard-funtests',
version='DEV',
description='Functional test suite for billiard',
author='Ask Solem',
author_email='ask@celeryproject.org',
url='http://github.com/celery/billiard',
platforms=['any'],
packages=[],
data_files=[],
zip_safe=False,
cmdclass={'install': no_install},
test_suite='nose.collector',
tests_require=[
'nose',
'nose-cover3',
'unittest2',
'coverage>=3.0',
],
classifiers=[
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: C',
'License :: OSI Approved :: BSD License',
'Intended Audience :: Developers',
],
long_description='Do not install this package',
)


@ -0,0 +1,7 @@
import os
import sys
sys.path.insert(0, os.path.join(os.getcwd(), os.pardir))
print(sys.path[0])
sys.path.insert(0, os.getcwd())
print(sys.path[0])

File diff suppressed because it is too large

5
requirements/test-ci.txt Normal file

@ -0,0 +1,5 @@
coverage>=3.0
redis
pymongo
SQLAlchemy
PyOpenSSL

4
requirements/test.txt Normal file

@ -0,0 +1,4 @@
unittest2>=0.4.0
nose
nose-cover3
mock

3
requirements/test3.txt Normal file

@ -0,0 +1,3 @@
nose
nose-cover3
mock

12
setup.cfg Normal file

@ -0,0 +1,12 @@
[nosetests]
where = billiard/tests
cover3-branch = 1
cover3-html = 1
cover3-package = billiard
cover3-exclude = billiard.tests
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0

268
setup.py Normal file

@ -0,0 +1,268 @@
from __future__ import print_function
import os
import sys
import glob
try:
from setuptools import setup, Extension, find_packages
except ImportError:
from distutils.core import setup, Extension, find_packages # noqa
from distutils import sysconfig
from distutils.errors import (
CCompilerError,
DistutilsExecError,
DistutilsPlatformError
)
HERE = os.path.dirname(os.path.abspath(__file__))
ext_errors = (CCompilerError, DistutilsExecError, DistutilsPlatformError)
if sys.platform == 'win32' and sys.version_info >= (2, 6):
# distutils.msvc9compiler can raise IOError if the compiler is missing
ext_errors += (IOError, )
is_jython = sys.platform.startswith('java')
is_pypy = hasattr(sys, 'pypy_version_info')
is_py3k = sys.version_info[0] == 3
BUILD_WARNING = """
-----------------------------------------------------------------------
WARNING: The C extensions could not be compiled
-----------------------------------------------------------------------
Maybe you do not have a C compiler installed on this system?
The reason was:
%s
This is just a warning, as most of the functionality will work even
without the updated C extension. It will simply fall back to the
built-in _multiprocessing module. Most notably, you will not be able to
use FORCE_EXECV on POSIX systems. If this is a problem for you, then
please install a C compiler or fix the error(s) above.
-----------------------------------------------------------------------
"""
# -*- py3k -*-
extras = {}
# -*- Distribution Meta -*-
import re
re_meta = re.compile(r'__(\w+?)__\s*=\s*(.*)')
re_vers = re.compile(r'VERSION\s*=\s*\((.*?)\)')
re_doc = re.compile(r'^"""(.+?)"""')
rq = lambda s: s.strip("\"'")
def add_default(m):
attr_name, attr_value = m.groups()
return ((attr_name, rq(attr_value)), )
def add_version(m):
v = list(map(rq, m.groups()[0].split(', ')))
return (('VERSION', '.'.join(v[0:4]) + ''.join(v[4:])), )
def add_doc(m):
return (('doc', m.groups()[0]), )
pats = {re_meta: add_default,
re_vers: add_version,
re_doc: add_doc}
here = os.path.abspath(os.path.dirname(__file__))
meta_fh = open(os.path.join(here, 'billiard/__init__.py'))
try:
meta = {}
for line in meta_fh:
if line.strip() == '# -eof meta-':
break
for pattern, handler in pats.items():
m = pattern.match(line.strip())
if m:
meta.update(handler(m))
finally:
meta_fh.close()
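# Illustrative example of what the patterns above extract (hypothetical
# input lines, not the actual contents of billiard/__init__.py):
#
#     VERSION = (3, 3, 0, 18)     ->  meta['VERSION'] == '3.3.0.18'
#     __author__ = 'Ask Solem'    ->  meta['author'] == 'Ask Solem'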
if sys.version_info < (2, 5):
raise ValueError('Versions of Python before 2.5 are not supported')
if sys.platform == 'win32': # Windows
macros = dict()
libraries = ['ws2_32']
elif sys.platform.startswith('darwin'): # Mac OSX
macros = dict(
HAVE_SEM_OPEN=1,
HAVE_SEM_TIMEDWAIT=0,
HAVE_FD_TRANSFER=1,
HAVE_BROKEN_SEM_GETVALUE=1
)
libraries = []
elif sys.platform.startswith('cygwin'): # Cygwin
macros = dict(
HAVE_SEM_OPEN=1,
HAVE_SEM_TIMEDWAIT=1,
HAVE_FD_TRANSFER=0,
HAVE_BROKEN_SEM_UNLINK=1
)
libraries = []
elif sys.platform in ('freebsd4', 'freebsd5', 'freebsd6'):
# FreeBSD's P1003.1b semaphore support is very experimental
# and has many known problems. (as of June 2008)
macros = dict( # FreeBSD 4-6
HAVE_SEM_OPEN=0,
HAVE_SEM_TIMEDWAIT=0,
HAVE_FD_TRANSFER=1,
)
libraries = []
elif re.match('^(gnukfreebsd(8|9|10|11)|freebsd(7|8|9|10))', sys.platform):
macros = dict( # FreeBSD 7+ and GNU/kFreeBSD 8+
HAVE_SEM_OPEN=bool(
sysconfig.get_config_var('HAVE_SEM_OPEN') and not
bool(sysconfig.get_config_var('POSIX_SEMAPHORES_NOT_ENABLED'))
),
HAVE_SEM_TIMEDWAIT=1,
HAVE_FD_TRANSFER=1,
)
libraries = []
elif sys.platform.startswith('openbsd'):
macros = dict( # OpenBSD
HAVE_SEM_OPEN=0, # Not implemented
HAVE_SEM_TIMEDWAIT=0,
HAVE_FD_TRANSFER=1,
)
libraries = []
else: # Linux and other unices
macros = dict(
HAVE_SEM_OPEN=1,
HAVE_SEM_TIMEDWAIT=1,
HAVE_FD_TRANSFER=1,
)
libraries = ['rt']
if sys.platform == 'win32':
multiprocessing_srcs = [
'Modules/_billiard/multiprocessing.c',
'Modules/_billiard/semaphore.c',
'Modules/_billiard/pipe_connection.c',
'Modules/_billiard/socket_connection.c',
'Modules/_billiard/win32_functions.c',
]
else:
multiprocessing_srcs = [
'Modules/_billiard/multiprocessing.c',
'Modules/_billiard/socket_connection.c',
]
if macros.get('HAVE_SEM_OPEN', False):
multiprocessing_srcs.append('Modules/_billiard/semaphore.c')
long_description = open(os.path.join(HERE, 'README.rst')).read()
long_description += """
===========
Changes
===========
"""
long_description += open(os.path.join(HERE, 'CHANGES.txt')).read()
if not is_py3k:
long_description = long_description.encode('ascii', 'replace')
# -*- Installation Requires -*-
py_version = sys.version_info
is_jython = sys.platform.startswith('java')
is_pypy = hasattr(sys, 'pypy_version_info')
def strip_comments(l):
return l.split('#', 1)[0].strip()
def reqs(f):
return list(filter(None, [strip_comments(l) for l in open(
os.path.join(os.getcwd(), 'requirements', f)).readlines()]))
if py_version[0] == 3:
tests_require = reqs('test3.txt')
else:
tests_require = reqs('test.txt')
def _is_build_command(argv=sys.argv, cmds=('install', 'build', 'bdist')):
for arg in argv:
if arg.startswith(cmds):
return arg
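# Behaviour sketch (hypothetical argv values): the prefix match means any
# build-ish subcommand triggers the C-extension warning path below.
#
#     _is_build_command(['setup.py', 'build_ext'])   # -> 'build_ext'
#     _is_build_command(['setup.py', 'test'])        # -> None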
def run_setup(with_extensions=True):
extensions = []
if with_extensions:
extensions = [
Extension(
'_billiard',
sources=multiprocessing_srcs,
define_macros=list(macros.items()),  # dict items -> list for distutils
libraries=libraries,
include_dirs=['Modules/_billiard'],
depends=glob.glob('Modules/_billiard/*.h') + ['setup.py'],
),
]
exclude = 'billiard.py2' if is_py3k else 'billiard.py3'
packages = find_packages(exclude=[
'ez_setup', 'tests', 'funtests.*', 'tests.*', exclude,
])
setup(
name='billiard',
version=meta['VERSION'],
description=meta['doc'],
long_description=long_description,
packages=packages,
ext_modules=extensions,
author=meta['author'],
author_email=meta['author_email'],
maintainer=meta['maintainer'],
maintainer_email=meta['contact'],
url=meta['homepage'],
zip_safe=False,
license='BSD',
tests_require=tests_require,
test_suite='nose.collector',
classifiers=[
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Programming Language :: Python',
'Programming Language :: C',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.5',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: Jython',
'Programming Language :: Python :: Implementation :: PyPy',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'License :: OSI Approved :: BSD License',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: System :: Distributed Computing',
],
**extras
)
try:
run_setup(not (is_jython or is_pypy or is_py3k))
except BaseException:
if _is_build_command(sys.argv):
import traceback
print(BUILD_WARNING % '\n'.join(traceback.format_stack()),
file=sys.stderr)
run_setup(False)
else:
raise