1 /*- 2 * BSD LICENSE 3 * 4 * Copyright(c) 2015 Intel Corporation. All rights reserved. 5 * All rights reserved. 6 * 7 * Redistribution and use in source and binary forms, with or without 8 * modification, are permitted provided that the following conditions 9 * are met: 10 * 11 * * Redistributions of source code must retain the above copyright 12 * notice, this list of conditions and the following disclaimer. 13 * * Redistributions in binary form must reproduce the above copyright 14 * notice, this list of conditions and the following disclaimer in 15 * the documentation and/or other materials provided with the 16 * distribution. 17 * * Neither the name of Intel Corporation nor the names of its 18 * contributors may be used to endorse or promote products derived 19 * from this software without specific prior written permission. 20 * 21 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 22 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 23 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 24 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 25 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 26 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 27 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 28 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 29 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 30 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 31 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 32 */ 33 34 /* 35 * Some portions of this software may have been derived from the 36 * https://github.com/halayli/lthread which carrys the following license. 37 * 38 * Copyright (C) 2012, Hasan Alayli <[email protected]> 39 * 40 * Redistribution and use in source and binary forms, with or without 41 * modification, are permitted provided that the following conditions 42 * are met: 43 * 1. Redistributions of source code must retain the above copyright 44 * notice, this list of conditions and the following disclaimer. 45 * 2. Redistributions in binary form must reproduce the above copyright 46 * notice, this list of conditions and the following disclaimer in the 47 * documentation and/or other materials provided with the distribution. 48 * 49 * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND 50 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 51 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 52 * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE 53 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 54 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 55 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 56 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 57 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 58 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 59 * SUCH DAMAGE. 60 */ 61 62 /** 63 * @file lthread_api.h 64 * 65 * @warning 66 * @b EXPERIMENTAL: this API may change without prior notice 67 * 68 * This file contains the public API for the L-thread subsystem 69 * 70 * The L_thread subsystem provides a simple cooperative scheduler to 71 * enable arbitrary functions to run as cooperative threads within a 72 * single P-thread. 73 * 74 * The subsystem provides a P-thread like API that is intended to assist in 75 * reuse of legacy code written for POSIX p_threads. 76 * 77 * The L-thread subsystem relies on cooperative multitasking, as such 78 * an L-thread must possess frequent rescheduling points. Often these 79 * rescheduling points are provided transparently when the application 80 * invokes an L-thread API. 81 * 82 * In some applications it is possible that the program may enter a loop the 83 * exit condition for which depends on the action of another thread or a 84 * response from hardware. In such a case it is necessary to yield the thread 85 * periodically in the loop body, to allow other threads an opportunity to 86 * run. This can be done by inserting a call to lthread_yield() or 87 * lthread_sleep(n) in the body of the loop. 88 * 89 * If the application makes expensive / blocking system calls or does other 90 * work that would take an inordinate amount of time to complete, this will 91 * stall the cooperative scheduler resulting in very poor performance. 92 * 93 * In such cases an L-thread can be migrated temporarily to another scheduler 94 * running in a different P-thread on another core. When the expensive or 95 * blocking operation is completed it can be migrated back to the original 96 * scheduler. In this way other threads can continue to run on the original 97 * scheduler and will be completely unaffected by the blocking behaviour. 98 * To migrate an L-thread to another scheduler the API lthread_set_affinity() 99 * is provided. 100 * 101 * If L-threads that share data are running on the same core it is possible 102 * to design programs where mutual exclusion mechanisms to protect shared data 103 * can be avoided. This is due to the fact that the cooperative threads cannot 104 * preempt each other. 105 * 106 * There are two cases where mutual exclusion mechanisms are necessary. 107 * 108 * a) Where the L-threads sharing data are running on different cores. 109 * b) Where code must yield while updating data shared with another thread. 110 * 111 * The L-thread subsystem provides a set of mutex APIs to help with such 112 * scenarios, however excessive reliance on on these will impact performance 113 * and is best avoided if possible. 114 * 115 * L-threads can synchronise using a fast condition variable implementation 116 * that supports signal and broadcast. An L-thread running on any core can 117 * wait on a condition. 118 * 119 * L-threads can have L-thread local storage with an API modelled on either the 120 * P-thread get/set specific API or using PER_LTHREAD macros modelled on the 121 * RTE_PER_LCORE macros. Alternatively a simple user data pointer may be set 122 * and retrieved from a thread. 123 */ 124 #ifndef LTHREAD_H 125 #define LTHREAD_H 126 127 #ifdef __cplusplus 128 extern "C" { 129 #endif 130 131 #include <stdint.h> 132 #include <sys/socket.h> 133 #include <fcntl.h> 134 #include <netinet/in.h> 135 136 #include <rte_cycles.h> 137 138 139 struct lthread; 140 struct lthread_cond; 141 struct lthread_mutex; 142 143 struct lthread_condattr; 144 struct lthread_mutexattr; 145 146 typedef void (*lthread_func_t) (void *); 147 148 /* 149 * Define the size of stack for an lthread 150 * Then this is the size that will be allocated on lthread creation 151 * This is a fixed size and will not grow. 152 */ 153 #define LTHREAD_MAX_STACK_SIZE (1024*64) 154 155 /** 156 * Define the maximum number of TLS keys that can be created 157 * 158 */ 159 #define LTHREAD_MAX_KEYS 1024 160 161 /** 162 * Define the maximum number of attempts to destroy an lthread's 163 * TLS data on thread exit 164 */ 165 #define LTHREAD_DESTRUCTOR_ITERATIONS 4 166 167 168 /** 169 * Define the maximum number of lcores that will support lthreads 170 */ 171 #define LTHREAD_MAX_LCORES RTE_MAX_LCORE 172 173 /** 174 * How many lthread objects to pre-allocate as the system grows 175 * applies to lthreads + stacks, TLS, mutexs, cond vars. 176 * 177 * @see _lthread_alloc() 178 * @see _cond_alloc() 179 * @see _mutex_alloc() 180 * 181 */ 182 #define LTHREAD_PREALLOC 100 183 184 /** 185 * Set the number of schedulers in the system. 186 * 187 * This function may optionally be called before starting schedulers. 188 * 189 * If the number of schedulers is not set, or set to 0 then each scheduler 190 * will begin scheduling lthreads immediately it is started. 191 192 * If the number of schedulers is set to greater than 0, then each scheduler 193 * will wait until all schedulers have started before beginning to schedule 194 * lthreads. 195 * 196 * If an application wishes to have threads migrate between cores using 197 * lthread_set_affinity(), or join threads running on other cores using 198 * lthread_join(), then it is prudent to set the number of schedulers to ensure 199 * that all schedulers are initialised beforehand. 200 * 201 * @param num 202 * the number of schedulers in the system 203 * @return 204 * the number of schedulers in the system 205 */ 206 int lthread_num_schedulers_set(int num); 207 208 /** 209 * Return the number of schedulers currently running 210 * @return 211 * the number of schedulers in the system 212 */ 213 int lthread_active_schedulers(void); 214 215 /** 216 * Shutdown the specified scheduler 217 * 218 * This function tells the specified scheduler to 219 * exit if/when there is no more work to do. 220 * 221 * Note that although the scheduler will stop 222 * resources are not freed. 223 * 224 * @param lcore 225 * The lcore of the scheduler to shutdown 226 * 227 * @return 228 * none 229 */ 230 void lthread_scheduler_shutdown(unsigned lcore); 231 232 /** 233 * Shutdown all schedulers 234 * 235 * This function tells all schedulers including the current scheduler to 236 * exit if/when there is no more work to do. 237 * 238 * Note that although the schedulers will stop 239 * resources are not freed. 240 * 241 * @return 242 * none 243 */ 244 void lthread_scheduler_shutdown_all(void); 245 246 /** 247 * Run the lthread scheduler 248 * 249 * Runs the lthread scheduler. 250 * This function returns only if/when all lthreads have exited. 251 * This function must be the main loop of an EAL thread. 252 * 253 * @return 254 * none 255 */ 256 257 void lthread_run(void); 258 259 /** 260 * Create an lthread 261 * 262 * Creates an lthread and places it in the ready queue on a particular 263 * lcore. 264 * 265 * If no scheduler exists yet on the curret lcore then one is created. 266 * 267 * @param new_lt 268 * Pointer to an lthread pointer that will be initialized 269 * @param lcore 270 * the lcore the thread should be started on or the current clore 271 * -1 the current lcore 272 * 0 - LTHREAD_MAX_LCORES any other lcore 273 * @param lthread_func 274 * Pointer to the function the for the thread to run 275 * @param arg 276 * Pointer to args that will be passed to the thread 277 * 278 * @return 279 * 0 success 280 * EAGAIN no resources available 281 * EINVAL NULL thread or function pointer, or lcore_id out of range 282 */ 283 int 284 lthread_create(struct lthread **new_lt, 285 int lcore, lthread_func_t func, void *arg); 286 287 /** 288 * Cancel an lthread 289 * 290 * Cancels an lthread and causes it to be terminated 291 * If the lthread is detached it will be freed immediately 292 * otherwise its resources will not be released until it is joined. 293 * 294 * @param new_lt 295 * Pointer to an lthread that will be cancelled 296 * 297 * @return 298 * 0 success 299 * EINVAL thread was NULL 300 */ 301 int lthread_cancel(struct lthread *lt); 302 303 /** 304 * Join an lthread 305 * 306 * Joins the current thread with the specified lthread, and waits for that 307 * thread to exit. 308 * Passes an optional pointer to collect returned data. 309 * 310 * @param lt 311 * Pointer to the lthread to be joined 312 * @param ptr 313 * Pointer to pointer to collect returned data 314 * 315 0 * @return 316 * 0 success 317 * EINVAL lthread could not be joined. 318 */ 319 int lthread_join(struct lthread *lt, void **ptr); 320 321 /** 322 * Detach an lthread 323 * 324 * Detaches the current thread 325 * On exit a detached lthread will be freed immediately and will not wait 326 * to be joined. The default state for a thread is not detached. 327 * 328 * @return 329 * none 330 */ 331 void lthread_detach(void); 332 333 /** 334 * Exit an lthread 335 * 336 * Terminate the current thread, optionally return data. 337 * The data may be collected by lthread_join() 338 * 339 * After calling this function the lthread will be suspended until it is 340 * joined. After it is joined then its resources will be freed. 341 * 342 * @param ptr 343 * Pointer to pointer to data to be returned 344 * 345 * @return 346 * none 347 */ 348 void lthread_exit(void *val); 349 350 /** 351 * Cause the current lthread to sleep for n nanoseconds 352 * 353 * The current thread will be suspended until the specified time has elapsed 354 * or has been exceeded. 355 * 356 * Execution will switch to the next lthread that is ready to run 357 * 358 * @param nsecs 359 * Number of nanoseconds to sleep 360 * 361 * @return 362 * none 363 */ 364 void lthread_sleep(uint64_t nsecs); 365 366 /** 367 * Cause the current lthread to sleep for n cpu clock ticks 368 * 369 * The current thread will be suspended until the specified time has elapsed 370 * or has been exceeded. 371 * 372 * Execution will switch to the next lthread that is ready to run 373 * 374 * @param clks 375 * Number of clock ticks to sleep 376 * 377 * @return 378 * none 379 */ 380 void lthread_sleep_clks(uint64_t clks); 381 382 /** 383 * Yield the current lthread 384 * 385 * The current thread will yield and execution will switch to the 386 * next lthread that is ready to run 387 * 388 * @return 389 * none 390 */ 391 void lthread_yield(void); 392 393 /** 394 * Migrate the current thread to another scheduler 395 * 396 * This function migrates the current thread to another scheduler. 397 * Execution will switch to the next lthread that is ready to run on the 398 * current scheduler. The current thread will be resumed on the new scheduler. 399 * 400 * @param lcore 401 * The lcore to migrate to 402 * 403 * @return 404 * 0 success we are now running on the specified core 405 * EINVAL the destination lcore was not valid 406 */ 407 int lthread_set_affinity(unsigned lcore); 408 409 /** 410 * Return the current lthread 411 * 412 * Returns the current lthread 413 * 414 * @return 415 * pointer to the current lthread 416 */ 417 struct lthread 418 *lthread_current(void); 419 420 /** 421 * Associate user data with an lthread 422 * 423 * This function sets a user data pointer in the current lthread 424 * The pointer can be retrieved with lthread_get_data() 425 * It is the users responsibility to allocate and free any data referenced 426 * by the user pointer. 427 * 428 * @param data 429 * pointer to user data 430 * 431 * @return 432 * none 433 */ 434 void lthread_set_data(void *data); 435 436 /** 437 * Get user data for the current lthread 438 * 439 * This function returns a user data pointer for the current lthread 440 * The pointer must first be set with lthread_set_data() 441 * It is the users responsibility to allocate and free any data referenced 442 * by the user pointer. 443 * 444 * @return 445 * pointer to user data 446 */ 447 void 448 *lthread_get_data(void); 449 450 struct lthread_key; 451 typedef void (*tls_destructor_func) (void *); 452 453 /** 454 * Create a key for lthread TLS 455 * 456 * This function is modelled on pthread_key_create 457 * It creates a thread-specific data key visible to all lthreads on the 458 * current scheduler. 459 * 460 * Key values may be used to locate thread-specific data. 461 * The same key value may be used by different threads, the values bound 462 * to the key by lthread_setspecific() are maintained on a per-thread 463 * basis and persist for the life of the calling thread. 464 * 465 * An optional destructor function may be associated with each key value. 466 * At thread exit, if a key value has a non-NULL destructor pointer, and the 467 * thread has a non-NULL value associated with the key, the function pointed 468 * to is called with the current associated value as its sole argument. 469 * 470 * @param key 471 * Pointer to the key to be created 472 * @param destructor 473 * Pointer to destructor function 474 * 475 * @return 476 * 0 success 477 * EINVAL the key ptr was NULL 478 * EAGAIN no resources available 479 */ 480 int lthread_key_create(unsigned int *key, tls_destructor_func destructor); 481 482 /** 483 * Delete key for lthread TLS 484 * 485 * This function is modelled on pthread_key_delete(). 486 * It deletes a thread-specific data key previously returned by 487 * lthread_key_create(). 488 * The thread-specific data values associated with the key need not be NULL 489 * at the time that lthread_key_delete is called. 490 * It is the responsibility of the application to free any application 491 * storage or perform any cleanup actions for data structures related to the 492 * deleted key. This cleanup can be done either before or after 493 * lthread_key_delete is called. 494 * 495 * @param key 496 * The key to be deleted 497 * 498 * @return 499 * 0 Success 500 * EINVAL the key was invalid 501 */ 502 int lthread_key_delete(unsigned int key); 503 504 /** 505 * Get lthread TLS 506 * 507 * This function is modelled on pthread_get_specific(). 508 * It returns the value currently bound to the specified key on behalf of the 509 * calling thread. Calling lthread_getspecific() with a key value not 510 * obtained from lthread_key_create() or after key has been deleted with 511 * lthread_key_delete() will result in undefined behaviour. 512 * lthread_getspecific() may be called from a thread-specific data destructor 513 * function. 514 * 515 * @param key 516 * The key for which data is requested 517 * 518 * @return 519 * Pointer to the thread specific data associated with that key 520 * or NULL if no data has been set. 521 */ 522 void 523 *lthread_getspecific(unsigned int key); 524 525 /** 526 * Set lthread TLS 527 * 528 * This function is modelled on pthread_set_sepcific() 529 * It associates a thread-specific value with a key obtained via a previous 530 * call to lthread_key_create(). 531 * Different threads may bind different values to the same key. These values 532 * are typically pointers to dynamically allocated memory that have been 533 * reserved by the calling thread. Calling lthread_setspecific with a key 534 * value not obtained from lthread_key_create or after the key has been 535 * deleted with lthread_key_delete will result in undefined behaviour. 536 * 537 * @param key 538 * The key for which data is to be set 539 * @param key 540 * Pointer to the user data 541 * 542 * @return 543 * 0 success 544 * EINVAL the key was invalid 545 */ 546 547 int lthread_setspecific(unsigned int key, const void *value); 548 549 /** 550 * The macros below provide an alternative mechanism to access lthread local 551 * storage. 552 * 553 * The macros can be used to declare define and access per lthread local 554 * storage in a similar way to the RTE_PER_LCORE macros which control storage 555 * local to an lcore. 556 * 557 * Memory for per lthread variables declared in this way is allocated when the 558 * lthread is created and a pointer to this memory is stored in the lthread. 559 * The per lthread variables are accessed via the pointer + the offset of the 560 * particular variable. 561 * 562 * The total size of per lthread storage, and the variable offsets are found by 563 * defining the variables in a unique global memory section, the start and end 564 * of which is known. This global memory section is used only in the 565 * computation of the addresses of the lthread variables, and is never actually 566 * used to store any data. 567 * 568 * Due to the fact that variables declared this way may be scattered across 569 * many files, the start and end of the section and variable offsets are only 570 * known after linking, thus the computation of section size and variable 571 * addresses is performed at run time. 572 * 573 * These macros are primarily provided to aid porting of code that makes use 574 * of the existing RTE_PER_LCORE macros. In principle it would be more efficient 575 * to gather all lthread local variables into a single structure and 576 * set/retrieve a pointer to that struct using the alternative 577 * lthread_data_set/get APIs. 578 * 579 * These macros are mutually exclusive with the lthread_data_set/get APIs. 580 * If you define storage using these macros then the lthread_data_set/get APIs 581 * will not perform as expected, the lthread_data_set API does nothing, and the 582 * lthread_data_get API returns the start of global section. 583 * 584 */ 585 /* start and end of per lthread section */ 586 extern char __start_per_lt; 587 extern char __stop_per_lt; 588 589 590 #define RTE_DEFINE_PER_LTHREAD(type, name) \ 591 __typeof__(type)__attribute((section("per_lt"))) per_lt_##name 592 593 /** 594 * Macro to declare an extern per lthread variable "var" of type "type" 595 */ 596 #define RTE_DECLARE_PER_LTHREAD(type, name) \ 597 extern __typeof__(type)__attribute((section("per_lt"))) per_lt_##name 598 599 /** 600 * Read/write the per-lcore variable value 601 */ 602 #define RTE_PER_LTHREAD(name) ((typeof(per_lt_##name) *)\ 603 ((char *)lthread_get_data() +\ 604 ((char *) &per_lt_##name - &__start_per_lt))) 605 606 /** 607 * Initialize a mutex 608 * 609 * This function provides a mutual exclusion device, the need for which 610 * can normally be avoided in a cooperative multitasking environment. 611 * It is provided to aid porting of legacy code originally written for 612 * preemptive multitasking environments such as pthreads. 613 * 614 * A mutex may be unlocked (not owned by any thread), or locked (owned by 615 * one thread). 616 * 617 * A mutex can never be owned by more than one thread simultaneously. 618 * A thread attempting to lock a mutex that is already locked by another 619 * thread is suspended until the owning thread unlocks the mutex. 620 * 621 * lthread_mutex_init() initializes the mutex object pointed to by mutex 622 * Optional mutex attributes specified in mutexattr, are reserved for future 623 * use and are currently ignored. 624 * 625 * If a thread calls lthread_mutex_lock() on the mutex, then if the mutex 626 * is currently unlocked, it becomes locked and owned by the calling 627 * thread, and lthread_mutex_lock returns immediately. If the mutex is 628 * already locked by another thread, lthread_mutex_lock suspends the calling 629 * thread until the mutex is unlocked. 630 * 631 * lthread_mutex_trylock behaves identically to rte_thread_mutex_lock, except 632 * that it does not block the calling thread if the mutex is already locked 633 * by another thread. 634 * 635 * lthread_mutex_unlock() unlocks the specified mutex. The mutex is assumed 636 * to be locked and owned by the calling thread. 637 * 638 * lthread_mutex_destroy() destroys a mutex object, freeing its resources. 639 * The mutex must be unlocked with nothing blocked on it before calling 640 * lthread_mutex_destroy. 641 * 642 * @param name 643 * Optional pointer to string describing the mutex 644 * @param mutex 645 * Pointer to pointer to the mutex to be initialized 646 * @param attribute 647 * Pointer to attribute - unused reserved 648 * 649 * @return 650 * 0 success 651 * EINVAL mutex was not a valid pointer 652 * EAGAIN insufficient resources 653 */ 654 655 int 656 lthread_mutex_init(char *name, struct lthread_mutex **mutex, 657 const struct lthread_mutexattr *attr); 658 659 /** 660 * Destroy a mutex 661 * 662 * This function destroys the specified mutex freeing its resources. 663 * The mutex must be unlocked before calling lthread_mutex_destroy. 664 * 665 * @see lthread_mutex_init() 666 * 667 * @param mutex 668 * Pointer to pointer to the mutex to be initialized 669 * 670 * @return 671 * 0 success 672 * EINVAL mutex was not an initialized mutex 673 * EBUSY mutex was still in use 674 */ 675 int lthread_mutex_destroy(struct lthread_mutex *mutex); 676 677 /** 678 * Lock a mutex 679 * 680 * This function attempts to lock a mutex. 681 * If a thread calls lthread_mutex_lock() on the mutex, then if the mutex 682 * is currently unlocked, it becomes locked and owned by the calling 683 * thread, and lthread_mutex_lock returns immediately. If the mutex is 684 * already locked by another thread, lthread_mutex_lock suspends the calling 685 * thread until the mutex is unlocked. 686 * 687 * @see lthread_mutex_init() 688 * 689 * @param mutex 690 * Pointer to pointer to the mutex to be initialized 691 * 692 * @return 693 * 0 success 694 * EINVAL mutex was not an initialized mutex 695 * EDEADLOCK the mutex was already owned by the calling thread 696 */ 697 698 int lthread_mutex_lock(struct lthread_mutex *mutex); 699 700 /** 701 * Try to lock a mutex 702 * 703 * This function attempts to lock a mutex. 704 * lthread_mutex_trylock behaves identically to rte_thread_mutex_lock, except 705 * that it does not block the calling thread if the mutex is already locked 706 * by another thread. 707 * 708 * 709 * @see lthread_mutex_init() 710 * 711 * @param mutex 712 * Pointer to pointer to the mutex to be initialized 713 * 714 * @return 715 * 0 success 716 * EINVAL mutex was not an initialized mutex 717 * EBUSY the mutex was already locked by another thread 718 */ 719 int lthread_mutex_trylock(struct lthread_mutex *mutex); 720 721 /** 722 * Unlock a mutex 723 * 724 * This function attempts to unlock the specified mutex. The mutex is assumed 725 * to be locked and owned by the calling thread. 726 * 727 * The oldest of any threads blocked on the mutex is made ready and may 728 * compete with any other running thread to gain the mutex, it fails it will 729 * be blocked again. 730 * 731 * @param mutex 732 * Pointer to pointer to the mutex to be initialized 733 * 734 * @return 735 * 0 mutex was unlocked 736 * EINVAL mutex was not an initialized mutex 737 * EPERM the mutex was not owned by the calling thread 738 */ 739 740 int lthread_mutex_unlock(struct lthread_mutex *mutex); 741 742 /** 743 * Initialize a condition variable 744 * 745 * This function initializes a condition variable. 746 * 747 * Condition variables can be used to communicate changes in the state of data 748 * shared between threads. 749 * 750 * @see lthread_cond_wait() 751 * 752 * @param name 753 * Pointer to optional string describing the condition variable 754 * @param c 755 * Pointer to pointer to the condition variable to be initialized 756 * @param attr 757 * Pointer to optional attribute reserved for future use, currently ignored 758 * 759 * @return 760 * 0 success 761 * EINVAL cond was not a valid pointer 762 * EAGAIN insufficient resources 763 */ 764 int 765 lthread_cond_init(char *name, struct lthread_cond **c, 766 const struct lthread_condattr *attr); 767 768 /** 769 * Destroy a condition variable 770 * 771 * This function destroys a condition variable that was created with 772 * lthread_cond_init() and releases its resources. 773 * 774 * @param cond 775 * Pointer to pointer to the condition variable to be destroyed 776 * 777 * @return 778 * 0 Success 779 * EBUSY condition variable was still in use 780 * EINVAL was not an initialised condition variable 781 */ 782 int lthread_cond_destroy(struct lthread_cond *cond); 783 784 /** 785 * Wait on a condition variable 786 * 787 * The function blocks the current thread waiting on the condition variable 788 * specified by cond. The waiting thread unblocks only after another thread 789 * calls lthread_cond_signal, or lthread_cond_broadcast, specifying the 790 * same condition variable. 791 * 792 * @param cond 793 * Pointer to pointer to the condition variable to be waited on 794 * 795 * @param reserved 796 * reserved for future use 797 * 798 * @return 799 * 0 The condition was signalled ( Success ) 800 * EINVAL was not a an initialised condition variable 801 */ 802 int lthread_cond_wait(struct lthread_cond *c, uint64_t reserved); 803 804 /** 805 * Signal a condition variable 806 * 807 * The function unblocks one thread waiting for the condition variable cond. 808 * If no threads are waiting on cond, the rte_lthead_cond_signal() function 809 * has no effect. 810 * 811 * @param cond 812 * Pointer to pointer to the condition variable to be signalled 813 * 814 * @return 815 * 0 The condition was signalled ( Success ) 816 * EINVAL was not a an initialised condition variable 817 */ 818 int lthread_cond_signal(struct lthread_cond *c); 819 820 /** 821 * Broadcast a condition variable 822 * 823 * The function unblocks all threads waiting for the condition variable cond. 824 * If no threads are waiting on cond, the rte_lthead_cond_broadcast() 825 * function has no effect. 826 * 827 * @param cond 828 * Pointer to pointer to the condition variable to be signalled 829 * 830 * @return 831 * 0 The condition was signalled ( Success ) 832 * EINVAL was not a an initialised condition variable 833 */ 834 int lthread_cond_broadcast(struct lthread_cond *c); 835 836 #ifdef __cplusplus 837 } 838 #endif 839 840 #endif /* LTHREAD_H */ 841