1.\" Copyright (c) 1993 2.\" The Regents of the University of California. All rights reserved. 3.\" 4.\" Redistribution and use in source and binary forms, with or without 5.\" modification, are permitted provided that the following conditions 6.\" are met: 7.\" 1. Redistributions of source code must retain the above copyright 8.\" notice, this list of conditions and the following disclaimer. 9.\" 2. Redistributions in binary form must reproduce the above copyright 10.\" notice, this list of conditions and the following disclaimer in the 11.\" documentation and/or other materials provided with the distribution. 12.\" 3. Neither the name of the University nor the names of its contributors 13.\" may be used to endorse or promote products derived from this software 14.\" without specific prior written permission. 15.\" 16.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND 17.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 18.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 19.\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE 20.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 21.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 22.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 23.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 24.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 25.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 26.\" SUCH DAMAGE. 27.\" 28.\" @(#)mlock.2 8.2 (Berkeley) 12/11/93 29.\" $FreeBSD$ 30.\" 31.Dd March 20, 2018 32.Dt MLOCK 2 33.Os 34.Sh NAME 35.Nm mlock , 36.Nm munlock 37.Nd lock (unlock) physical pages in memory 38.Sh LIBRARY 39.Lb libc 40.Sh SYNOPSIS 41.In sys/mman.h 42.Ft int 43.Fn mlock "const void *addr" "size_t len" 44.Ft int 45.Fn munlock "const void *addr" "size_t len" 46.Sh DESCRIPTION 47The 48.Fn mlock 49system call 50locks into memory the physical pages associated with the virtual address 51range starting at 52.Fa addr 53for 54.Fa len 55bytes. 56The 57.Fn munlock 58system call unlocks pages previously locked by one or more 59.Fn mlock 60calls. 61For both, the 62.Fa addr 63argument should be aligned to a multiple of the page size. 64If the 65.Fa len 66argument is not a multiple of the page size, it will be rounded up 67to be so. 68The entire range must be allocated. 69.Pp 70After an 71.Fn mlock 72system call, the indicated pages will cause neither a non-resident page 73nor address-translation fault until they are unlocked. 74They may still cause protection-violation faults or TLB-miss faults on 75architectures with software-managed TLBs. 76The physical pages remain in memory until all locked mappings for the pages 77are removed. 78Multiple processes may have the same physical pages locked via their own 79virtual address mappings. 80A single process may likewise have pages multiply-locked via different virtual 81mappings of the same physical pages. 82Unlocking is performed explicitly by 83.Fn munlock 84or implicitly by a call to 85.Fn munmap 86which deallocates the unmapped address range. 87Locked mappings are not inherited by the child process after a 88.Xr fork 2 . 89.Pp 90Since physical memory is a potentially scarce resource, processes are 91limited in how much they can lock down. 
.Pp
Since physical memory is a potentially scarce resource, processes are
limited in how much they can lock down.
The amount of memory that a single process can
.Fn mlock
is limited by both the per-process
.Dv RLIMIT_MEMLOCK
resource limit and the
system-wide
.Dq wired pages
limit
.Va vm.max_wired .
.Va vm.max_wired
applies to the system as a whole, so the amount available to a single
process at any given time is the difference between
.Va vm.max_wired
and
.Va vm.stats.vm.v_wire_count .
.Pp
If
.Va security.bsd.unprivileged_mlock
is set to 0, these calls are only available to the super-user.
.Sh RETURN VALUES
.Rv -std
.Pp
If the call succeeds, all pages in the range become locked (unlocked);
otherwise the locked status of all pages in the range remains unchanged.
.Sh ERRORS
The
.Fn mlock
system call
will fail if:
.Bl -tag -width Er
.It Bq Er EPERM
.Va security.bsd.unprivileged_mlock
is set to 0 and the caller is not the super-user.
.It Bq Er EINVAL
The address range given wraps around zero.
.It Bq Er EAGAIN
Locking the indicated range would exceed the system limit for locked memory.
.It Bq Er ENOMEM
Some portion of the indicated address range is not allocated,
there was an error faulting/mapping a page,
or locking the indicated range would exceed the per-process limit for
locked memory.
.El
.Pp
The
.Fn munlock
system call
will fail if:
.Bl -tag -width Er
.It Bq Er EPERM
.Va security.bsd.unprivileged_mlock
is set to 0 and the caller is not the super-user.
.It Bq Er EINVAL
The address range given wraps around zero.
.It Bq Er ENOMEM
Some or all of the address range specified by the
.Fa addr
and
.Fa len
arguments does not correspond to valid mapped pages in the address space
of the process.
.It Bq Er ENOMEM
Locking the pages mapped by the specified range would exceed a limit on
the amount of memory that the process may lock.
.El
.Sh SEE ALSO
.Xr fork 2 ,
.Xr mincore 2 ,
.Xr minherit 2 ,
.Xr mlockall 2 ,
.Xr mmap 2 ,
.Xr munlockall 2 ,
.Xr munmap 2 ,
.Xr setrlimit 2 ,
.Xr getpagesize 3
.Sh HISTORY
The
.Fn mlock
and
.Fn munlock
system calls first appeared in
.Bx 4.4 .
.Sh BUGS
Allocating too much wired memory can lead to a memory-allocation deadlock
that requires a reboot to recover from.
.Pp
The per-process resource limit is a limit on the amount of virtual
memory locked, while the system-wide limit is for the number of locked
physical pages.
Hence a process with two distinct locked mappings of the same physical page
counts as 2 pages against the per-process limit and as only a single page
against the system limit.
.Pp
The per-process resource limit is not currently supported.
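.Sh EXAMPLES
The following sketch reports the two limits discussed above before any
locking is attempted.
It assumes that the
.Va vm.max_wired
and
.Va vm.stats.vm.v_wire_count
sysctls are present as 32-bit or 64-bit integers; newer systems may use
.Va vm.max_user_wired
instead, and the computed headroom is only a snapshot that can change
at any moment:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/resource.h>
#include <sys/sysctl.h>

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Read an integer sysctl whose width may be 32 or 64 bits,
 * depending on the system version.
 */
static int
read_wide_sysctl(const char *name, uint64_t *out)
{
	uint64_t v64;
	uint32_t v32;
	size_t len;

	len = sizeof(v64);
	if (sysctlbyname(name, &v64, &len, NULL, 0) == 0 &&
	    len == sizeof(v64)) {
		*out = v64;
		return (0);
	}
	len = sizeof(v32);
	if (sysctlbyname(name, &v32, &len, NULL, 0) == 0 &&
	    len == sizeof(v32)) {
		*out = v32;
		return (0);
	}
	return (-1);
}

int
main(void)
{
	struct rlimit rl;
	uint64_t max_wired, wired, pagesz;

	pagesz = (uint64_t)sysconf(_SC_PAGESIZE);

	/* Per-process limit: bytes of locked virtual memory (see BUGS). */
	if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0) {
		if (rl.rlim_cur == RLIM_INFINITY)
			printf("RLIMIT_MEMLOCK: unlimited\en");
		else
			printf("RLIMIT_MEMLOCK: %ju bytes\en",
			    (uintmax_t)rl.rlim_cur);
	}

	/* System-wide limit: wired physical pages. */
	if (read_wide_sysctl("vm.max_wired", &max_wired) == 0 &&
	    read_wide_sysctl("vm.stats.vm.v_wire_count", &wired) == 0 &&
	    max_wired > wired)
		printf("wired-page headroom: %ju pages (%ju bytes)\en",
		    (uintmax_t)(max_wired - wired),
		    (uintmax_t)((max_wired - wired) * pagesz));
	return (0);
}
.Ed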