HDK
thread.h File Reference
Wrappers and utilities for multithreading. More...
#include <algorithm>
#include <atomic>
#include <chrono>
#include <functional>
#include <future>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>
#include <OpenImageIO/atomic.h>
#include <OpenImageIO/dassert.h>
#include <OpenImageIO/export.h>
#include <OpenImageIO/oiioversion.h>
#include <OpenImageIO/platform.h>
Classes | |
class | null_mutex |
class | null_lock< T > |
class | atomic_backoff |
class | spin_mutex |
class | spin_mutex::lock_guard |
class | spin_rw_mutex |
class | spin_rw_mutex::read_lock_guard |
class | spin_rw_mutex::write_lock_guard |
class | mutex_pool< Mutex, Key, Hash, Bins > |
class | thread_group |
class | thread_pool |
class | task_set |
Macros | |
#define | OIIO_THREAD_ALLOW_DCLP 1 |
Typedefs | |
typedef std::lock_guard< mutex > | lock_guard |
typedef std::lock_guard < recursive_mutex > | recursive_lock_guard |
typedef spin_mutex::lock_guard | spin_lock |
typedef spin_rw_mutex::read_lock_guard | spin_rw_read_lock |
typedef spin_rw_mutex::write_lock_guard | spin_rw_write_lock |
Functions | |
void | yield () noexcept |
void | pause (int delay) noexcept |
*result_t | my_func (int thread_id, Arg1 arg1,...) |
*pool | push (my_func, arg1,...) |
**But if you need a or simply need to know when the task has note that the | push () method will return a future< result_t > *that you can check |
* | for (int i=0;i< n_subtasks;++i)*tasks.push(pool-> push(myfunc)) |
*tasks | wait () |
**Note that the tasks the is the thread number *for the or if it s being executed by a non pool | thread (this *can happen in cases where the whole pool is occupied and the calling *thread contributes to running the work load).**Thread pool.Have fun |
OIIO_UTIL_API thread_pool * | default_thread_pool () |
Variables | |
**If you just want to fire and | forget |
**If you just want to fire and | then |
**If you just want to fire and | args |
**But if you need a | result |
**But if you need a or simply need to know when the task has * | completed |
**But if you need a or simply need to know when the task has note that the like | this |
**And then you can **find out if it s | done |
*get result *(waiting if necessary)*A common idiom is to fire a bunch of sub tasks at the | queue |
*get result *(waiting if necessary)*A common idiom is to fire a bunch of sub tasks at the and then *wait for them to all complete We provide a helper | class |
*get result *(waiting if necessary)*A common idiom is to fire a bunch of sub tasks at the and then *wait for them to all complete We provide a helper | task_set |
*get result *(waiting if necessary)*A common idiom is to fire a bunch of sub tasks at the and then *wait for them to all complete We provide a helper *to make this | easy |
**Note that the tasks the | thread_id |
**Note that the tasks the is the thread number *for the | pool |
Wrappers and utilities for multithreading.
Definition in file thread.h.
typedef std::lock_guard<mutex> lock_guard |
typedef std::lock_guard<recursive_mutex> recursive_lock_guard |
typedef spin_mutex::lock_guard spin_lock |
OIIO_UTIL_API thread_pool* default_thread_pool ()
Return a reference to the "default" shared thread pool. In almost all ordinary circumstances, you should use this exclusively to get a single shared thread pool, since creating multiple thread pools could result in hilariously over-threading your application.
thread_pool is a persistent set of threads watching a queue to which tasks can be submitted.
Call default_thread_pool() to retrieve a pointer to a single shared thread_pool that will be initialized the first time it's needed, running a number of threads corresponding to the number of cores on the machine.
It's possible to create other pools, but it's not something that's recommended unless you really know what you're doing and are careful that the sum of threads across all pools doesn't cause you to be highly over-threaded. An example of when this might be useful is if you want one pool of 4 threads to handle I/O without interference from a separate pool of 4 (other) threads handling computation.
Submitting an asynchronous task to the queue follows this pattern:

    /* func that takes a thread ID followed possibly by more args */
    result_t my_func (int thread_id, Arg1 arg1, ...);
    pool->push (my_func, arg1, ...);

If you just want to "fire and forget", then:

    pool->push (func, ...args...);

But if you need a result, or simply need to know when the task has completed, note that the push() method will return a future<result_t> that you can check, like this:

    std::future<result_t> f = pool->push (my_task);

And then you can find out if it's done, or get the result (waiting if necessary):

    if (f.valid()) ...       // find out if it's done
    result_t r = f.get();    // get result (waiting if necessary)

A common idiom is to fire a bunch of sub-tasks at the queue, and then wait for them to all complete. We provide a helper class, task_set, to make this easy:

    task_set tasks (pool);
    for (int i = 0; i < n_subtasks; ++i)
        tasks.push (pool->push (myfunc));
    tasks.wait ();

Note that the thread_id passed to the task is the thread number for the pool, or indicates a non-pool thread if that is what is executing the task (this can happen in cases where the whole pool is occupied and the calling thread contributes to running the work load).