/oneTBB/doc/main/tbb_userguide/
work_isolation.rst
  12: particular, when a parallel construct calls another parallel
  18: the second (nested) parallel loop blocks execution of the first
  25: // The first parallel loop.
  27: // The second parallel loop.
  33: parallel loop. As a result, two or more iterations of the outer loop
  35: in oneTBB execution of functions constituting a parallel construct is
  43: change its value after a nested parallel construct:
  63: ways to *isolate* execution of a parallel construct, for its tasks to
  96: When entered a task waiting call or a blocking parallel construct
  111: is not changed unexpectedly during the call to a nested parallel
  [all …]
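These matches all concern nested parallelism: a thread that blocks in a nested parallel loop may pick up outer-loop iterations in the meantime, so thread-local state can change across the nested call. A minimal sketch of the isolation fix using the real `oneapi::tbb::this_task_arena::isolate` call (the thread-local variable is my illustration, not the doc's code):

```cpp
#include <oneapi/tbb/parallel_for.h>
#include <oneapi/tbb/task_arena.h>

int main() {
    thread_local int outer_i = -1;  // illustrative per-thread state
    oneapi::tbb::parallel_for(0, 100, [](int i) {
        outer_i = i;
        // Without isolation, this thread could steal another outer
        // iteration while blocked in the nested loop, changing outer_i.
        oneapi::tbb::this_task_arena::isolate([] {
            oneapi::tbb::parallel_for(0, 10, [](int) { /* nested work */ });
        });
        // With isolation, outer_i still equals i at this point.
    });
}
```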
|
Throughput_of_pipeline.rst
  10: running in parallel. Selecting the right value of ``N`` may involve some
  14: sequential filter. This is true even for a pipeline with no parallel
  18: parallel filters.
  24: much more than 2. To really benefit from a pipeline, the parallel
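A rough throughput model consistent with these lines, using my own symbols (s = time per item in the slowest serial filter, p = time per item in a parallel filter, P = worker threads):

$$\text{throughput} \approx \min\!\left(\frac{1}{s},\ \frac{P}{p}\right)$$

Once the token limit ``N`` keeps the parallel filters fed, the serial term 1/s sets the ceiling, which is why the parallel filters must dominate the work for the pipeline to pay off.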
|
Floating_Point_Settings.rst
  30: You can then pass the task group context to most parallel algorithms, including ``flow::graph``, to…
  31: It is possible to execute the parallel algorithms with different floating-point settings captured t…
  33: …ng task scheduler initialization. It means, if a context is passed to a parallel algorithm, the fl…
  36: In a nested call to a parallel algorithm that does not use the context of a task group with explici…
  46: * A call to a oneTBB parallel algorithm does not change the floating-point settings of the calling …
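The API these lines describe is `task_group_context::capture_fp_settings()`; a minimal sketch (the rounding modes and the `divide_all` function are my illustration):

```cpp
#include <oneapi/tbb/parallel_for.h>
#include <oneapi/tbb/task_group.h>  // task_group_context

#include <cfenv>

void divide_all(float* a, int n) {
    std::fesetround(FE_DOWNWARD);      // settings the tasks should use
    oneapi::tbb::task_group_context ctx;
    ctx.capture_fp_settings();         // snapshot into the context
    std::fesetround(FE_TONEAREST);     // caller may now switch back

    // Tasks of this algorithm run with the captured (FE_DOWNWARD)
    // settings; the caller's settings are untouched on return.
    oneapi::tbb::parallel_for(0, n, [a](int i) { a[i] /= 3.0f; }, ctx);
}
```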
|
appendix_B.rst
  28: #pragma omp parallel
  39: ``#pragma omp parallel`` causes the OpenMP to create a team of threads,
  42: previously created thread team to execute the loop in parallel.
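A minimal OpenMP counterpart of what lines 28-42 describe (the `scale` function is my illustration): the `parallel` pragma creates the thread team, and the `for` pragma reuses that team to split the loop iterations.

```cpp
void scale(float a[], int n, float s) {
    #pragma omp parallel     // create (or reuse) a team of threads
    {
        #pragma omp for      // the existing team divides the iterations
        for (int i = 0; i < n; ++i)
            a[i] *= s;
    }
}
```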
|
Working_on_the_Assembly_Line_pipeline.rst
  7: *Pipelining* is a common parallel pattern that mimics a traditional
  10: incoming stream of data, some of these filters can operate in parallel,
  20: ``parallel_pipeline`` and filter to perform parallel formatting. The
  52: done in parallel. That is, if you can serially read ``n`` chunks very
  53: quickly, you can transform each of the ``n`` chunks in parallel, as long
  56: to the middle filter, and thus be parallel.
  59: To amortize parallel scheduling overheads, the filters operate on chunks
  121: oneapi::tbb::filter_mode::parallel, MyTransformFunc() )
  131: order. In a parallel filter, multiple tokens can be processed in
  132: parallel by the filter. If the number of tokens were unlimited, there
  [all …]
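A sketch of the read/transform/write structure the excerpt describes, using the real `parallel_pipeline` API; the numeric I/O and squaring transform are my placeholders for the doc's chunked text processing and `MyTransformFunc`:

```cpp
#include <oneapi/tbb/parallel_pipeline.h>

#include <cstdio>

void RunPipeline(std::FILE* in, std::FILE* out) {
    const int ntoken = 8;  // token limit; bounds items in flight
    oneapi::tbb::parallel_pipeline(ntoken,
        // Serial input filter: reads the next item, stops at end of input.
        oneapi::tbb::make_filter<void, long>(
            oneapi::tbb::filter_mode::serial_in_order,
            [in](oneapi::tbb::flow_control& fc) -> long {
                long x;
                if (std::fscanf(in, "%ld", &x) != 1) { fc.stop(); return 0; }
                return x;
            }) &
        // Middle filter: may process many items at once.
        oneapi::tbb::make_filter<long, long>(
            oneapi::tbb::filter_mode::parallel,
            [](long x) { return x * x; }) &
        // Serial in-order output filter: writes results in input order.
        oneapi::tbb::make_filter<long, void>(
            oneapi::tbb::filter_mode::serial_in_order,
            [out](long x) { std::fprintf(out, "%ld\n", x); }));
}
```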
|
Cook_Until_Done_parallel_do.rst
  13: advance. In parallel programming, it is usually better to use dynamic
  16: can be safely processed in parallel, and processing each item takes at
  34: get parallel speedup by converting the loop to use
  52: The parallel form of ``SerialApplyFooToList`` is as follows:
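The conversion these lines describe, sketched with `parallel_for_each` (the current oneTBB name for the doc's `parallel_do`); `Item` and `Foo` are stand-ins for the doc's types:

```cpp
#include <oneapi/tbb/parallel_for_each.h>

#include <list>

struct Item { int value = 0; };
void Foo(Item& it) { it.value *= 2; }  // stand-in for the doc's Foo

// Parallel counterpart of the serial list walk: items are claimed
// dynamically, so uneven per-item cost and a container without random
// access are both handled.
void ParallelApplyFooToList(std::list<Item>& items) {
    oneapi::tbb::parallel_for_each(items.begin(), items.end(),
                                   [](Item& it) { Foo(it); });
}
```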
|
Parallelizing_Flow_Graph.rst
  86: two different values might be squared in parallel, or the same value
  87: might be squared and cubed in parallel. Likewise in the second example,
  88: the peanut butter might be spread on one slice of bread in parallel with
  90: is legal to execute in parallel, but allows the runtime library to
  91: choose at runtime what will be executed in parallel.
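A minimal flow-graph sketch of the first example those lines describe, squaring and cubing the same values; the node names are mine:

```cpp
#include <oneapi/tbb/flow_graph.h>

int main() {
    using namespace oneapi::tbb::flow;
    graph g;
    broadcast_node<int> input(g);
    // Each node may run for several inputs at once, and the two nodes
    // may run in parallel on the same input; the runtime decides.
    function_node<int, int> square(g, unlimited, [](int v) { return v * v; });
    function_node<int, int> cube(g, unlimited, [](int v) { return v * v * v; });
    make_edge(input, square);
    make_edge(input, cube);
    for (int i = 1; i <= 4; ++i) input.try_put(i);
    g.wait_for_all();
}
```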
|
estimate_flow_graph_performance.rst
  29: this path cannot be overlapped even in a parallel execution.
  30: Therefore, even if all other paths are executed in parallel with C,
  31: the wall clock time for the parallel execution is at least C, and the
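The excerpt's reasoning as a formula, in my notation (t_i = execution time of node i): nodes on a path run in sequence, so the longest path C lower-bounds the parallel wall-clock time:

$$T_{\text{wall}} \;\ge\; C = \max_{\text{path } p} \sum_{i \in p} t_i$$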
|
Bandwidth_and_Cache_Affinity_os.rst
  8: good speedup when written as parallel loops. The cause could be
  12: parallel program as well as the serial program.
  79: The next figure shows how parallel speedup might vary with the size of a
  83: improvement at the extremes. For small N, parallel scheduling overhead
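The remedy this doc section builds toward is cache affinity via `affinity_partitioner`; a minimal sketch, assuming a loop that revisits the same array across sweeps (function and variable names are mine):

```cpp
#include <oneapi/tbb/blocked_range.h>
#include <oneapi/tbb/parallel_for.h>
#include <oneapi/tbb/partitioner.h>

#include <cstddef>

void relax(float* a, std::size_t n, int sweeps) {
    // Reusing one affinity_partitioner across sweeps replays the previous
    // chunk-to-thread mapping, so each sweep finds its data still in cache.
    oneapi::tbb::affinity_partitioner ap;
    for (int s = 0; s < sweeps; ++s)
        oneapi::tbb::parallel_for(
            oneapi::tbb::blocked_range<std::size_t>(0, n),
            [a](const oneapi::tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    a[i] = 0.5f * a[i];
            },
            ap);
}
```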
|
Initializing_and_Terminating_the_Library.rst
  8: for example any parallel algorithm, flow graph or task group.
  18: or called inside a task, a parallel algorithm, or a flow graph node, the method fails.
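A sketch of the blocking-termination call those lines constrain, assuming oneTBB 2021.6+ where a `task_scheduler_handle` attaches via `tbb::attach`; treat the exact construction as an assumption and check your oneTBB version:

```cpp
#include <oneapi/tbb/global_control.h>  // task_scheduler_handle, finalize

int main() {
    oneapi::tbb::task_scheduler_handle handle{oneapi::tbb::attach{}};
    // ... run parallel work ...
    // Must be called at the outermost level, not inside a task,
    // a parallel algorithm, or a flow graph node.
    oneapi::tbb::finalize(handle);  // waits for worker threads to exit
}
```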
|
Controlling_Chunking_os.rst
  62: Because of the impact of grainsize on parallel loops, it is worth
  119: parallel performance if there is other parallelism available higher up
  143: that with a grainsize of one, most of the overhead is parallel
  145: proportional decrease in parallel overhead. Then the curve flattens out
  146: because the parallel overhead becomes insignificant for a sufficiently
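What an explicit grainsize looks like in practice; `G = 10000` is purely illustrative:

```cpp
#include <oneapi/tbb/blocked_range.h>
#include <oneapi/tbb/parallel_for.h>
#include <oneapi/tbb/partitioner.h>

#include <cstddef>

void apply(float* a, std::size_t n) {
    const std::size_t G = 10000;  // grainsize; 10000 is illustrative
    oneapi::tbb::parallel_for(
        // With simple_partitioner, chunks end up between G/2 and G
        // iterations, so G directly trades overhead against parallelism.
        oneapi::tbb::blocked_range<std::size_t>(0, n, G),
        [a](const oneapi::tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                a[i] *= 2.0f;
        },
        oneapi::tbb::simple_partitioner());
}
```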
|
/oneTBB/doc/GSG/
intro.rst
  6: |full_name| is a runtime-based parallel programming model for C++ code that uses threads.
  9: oneTBB enables you to simplify parallel programming by breaking computation into parallel running t…
  21: * Specify logical parallel structure instead of threads.
  22: * Emphasize data-parallel programming.
  23: * Take advantage of concurrent collections and parallel algorithms.
|
get_started.rst
  7: It is helpful for new users of parallel programming and experienced developers that want to improve…
  9: …u to have a basic knowledge of C++ programming and some experience with parallel programming conce…
|
/oneTBB/doc/main/intro/
intro_os.rst
  7: |full_name| is a library that supports scalable parallel programming using
  9: compilers. It is designed to promote scalable data parallel programming.
  11: larger parallel components from smaller parallel components. To use the
|
Benefits.rst
  15: There are a variety of approaches to parallel programming, ranging from
  48: - **oneTBB emphasizes scalable, data parallel programming**. Breaking a
  52: contrast, oneTBB emphasizes *data-parallel* programming, enabling
  54: Data-parallel programming scales well to larger numbers of processors
  55: by dividing the collection into smaller pieces. With data-parallel
|
/oneTBB/examples/parallel_for/tachyon/
README.md
  6: …parallel scheduling methods and their resulting speedup. The code was parallelized by speculating …
  42: …e number of pixels (in the `X` or `Y` direction, for a rectangular sub-area) in each parallel task.
|
/oneTBB/doc/main/tbb_userguide/design_patterns/
Divide_and_Conquer.rst
  83: provides a simple way to parallelize it. The parallel code is shown
  158: A parallel version is shown below.
  174: // Recurse on each child y of x in parallel
  192: recursive calls in parallel.
  197: concurrently, the parallel walk uses variable ``LocalHits`` to
  208: If parallel overhead is high, use the agglomeration pattern. For
  215: simple algorithm is used here to focus on exposition of the parallel
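A sketch matching the excerpt's tree walk, where line 174's comment recurses "on each child y of x in parallel"; `Node` and `visit` are stand-ins for the doc's types:

```cpp
#include <oneapi/tbb/parallel_for_each.h>

#include <vector>

struct Node { std::vector<Node*> children; };  // illustrative tree type
void visit(Node*) { /* process one node */ }

// Divide and conquer over a tree: process the current node, then
// recurse on each child y of x in parallel.
void ParallelWalk(Node* x) {
    visit(x);
    oneapi::tbb::parallel_for_each(x->children.begin(), x->children.end(),
                                   [](Node* y) { ParallelWalk(y); });
}
```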
|
Agglomeration.rst
  13: Parallelism is so fine grained that overhead of parallel scheduling
  26: elementwise addition of two arrays can be done fully in parallel, but
  37: - Individual computations can be done in parallel, but are small.
  57: parallel overhead. Too large a block size may limit parallelism or
  132: sub-problems in parallel only if they are above a certain threshold
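The threshold rule from line 132, sketched as a recursive sum; the cutoff value is illustrative, not from the doc:

```cpp
#include <oneapi/tbb/parallel_invoke.h>

const long Cutoff = 1000;  // illustrative threshold

// Agglomeration applied to divide and conquer: spawn parallel tasks
// only above the cutoff; below it the pieces are too small to repay
// scheduling overhead, so finish serially.
void Sum(const float* a, long n, float& out) {
    if (n <= Cutoff) {
        float s = 0;
        for (long i = 0; i < n; ++i) s += a[i];
        out = s;
        return;
    }
    float left, right;
    oneapi::tbb::parallel_invoke([&] { Sum(a, n / 2, left); },
                                 [&] { Sum(a + n / 2, n - n / 2, right); });
    out = left + right;
}
```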
|
Design_Patterns.rst
  7: This section provides some common parallel programming patterns and how
  35: by Eun-Gyu and Marc Snir, and the Berkeley parallel patterns wiki. See
|
/oneTBB/examples/parallel_for/seismic/
main.cpp
  39: bool parallel;    [member]
  47: parallel(parallel_) {}    [in RunOptions()]
  76: SeismicVideo video(u, options.numberOfFrames, options.threads.last, options.parallel);    [in main()]
|
/oneTBB/examples/concurrent_priority_queue/shortpath/
README.md
  4: … `N` nodes and some random number of connections between those nodes. A parallel algorithm based o…
  6: …ited". This is because nodes are added and removed from the open-set in parallel, resulting in som…
  8: …y queue is sorted) is not technically needed, so we could use this same parallel algorithm with ju…
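A sketch of the open-set those lines describe, using `oneapi::tbb::concurrent_priority_queue`; the entry type, comparator, and helper functions are my naming, not the example's code:

```cpp
#include <oneapi/tbb/concurrent_priority_queue.h>

#include <utility>

using Entry = std::pair<double, int>;  // (distance, node id); my naming

// Greater-than comparator so the smallest distance pops first.
struct FartherThan {
    bool operator()(const Entry& a, const Entry& b) const {
        return a.first > b.first;
    }
};

// Threads push newly relaxed nodes and pop the next candidate
// concurrently; because pops race, a node can be expanded more than
// once, which is the "wasted work" the README mentions.
oneapi::tbb::concurrent_priority_queue<Entry, FartherThan> open_set;

void add(int node, double dist) { open_set.push({dist, node}); }
bool next(Entry& e) { return open_set.try_pop(e); }
```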
|
/oneTBB/test/tbb/
test_openmp.cpp
  81: #pragma omp parallel num_threads(p)    [in OpenMP_TBB_Convolve()]
  117: #pragma omp parallel for reduction(+:sum) num_threads(p)    [in operator()()]
  135: #pragma omp parallel    [in TestNumThreads()]
|
/oneTBB/examples/parallel_reduce/convex_hull/
README.md
  11: - `convex_hull_sample` - builds parallel version of the example which uses `parallel_reduce`, `par…
  12: - `convex_hull_bench` - build version of the example that compares serial and parallel buffered and…
|
/oneTBB/doc/main/tbb_userguide/Migration_Guide/
Task_Scheduler_Init.rst
  22: …returns the maximum number of threads available for the parallel algorithms within the current con…
  34: limits the maximum concurrency of the parallel algorithm running inside ``task_arena``
  94: // Limit the number of threads to two for all oneTBB parallel interfaces
  154: // Do some parallel work here
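The two replacement mechanisms this migration page contrasts, sketched with the real APIs (`global_control` for a process-wide cap, `task_arena` for a per-region cap):

```cpp
#include <oneapi/tbb/global_control.h>
#include <oneapi/tbb/parallel_for.h>
#include <oneapi/tbb/task_arena.h>

int main() {
    // Limit the number of threads to two for all oneTBB parallel
    // interfaces, for the lifetime of this object:
    oneapi::tbb::global_control gc(
        oneapi::tbb::global_control::max_allowed_parallelism, 2);

    // Or cap concurrency for one region only, via an explicit arena:
    oneapi::tbb::task_arena arena(2);
    arena.execute([] {
        oneapi::tbb::parallel_for(0, 1000, [](int) { /* parallel work */ });
    });
}
```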
|
/oneTBB/test/conformance/
conformance_parallel_pipeline.cpp
  70: static const oneapi::tbb::filter_mode filter_table[] = { oneapi::tbb::filter_mode::parallel,
  163: oneapi::tbb::filter_mode::parallel,    [in RootSequence()]
  308: oneapi::tbb::filter<short,int> filter3(oneapi::tbb::filter_mode::parallel,
  336: oneapi::tbb::filter<void,int> filter1(oneapi::tbb::filter_mode::parallel,
  350: oneapi::tbb::filter<int,int> filter2(oneapi::tbb::filter_mode::parallel,
|