1.\" Copyright (c) 1993
2.\"	The Regents of the University of California.  All rights reserved.
3.\"
4.\" Redistribution and use in source and binary forms, with or without
5.\" modification, are permitted provided that the following conditions
6.\" are met:
7.\" 1. Redistributions of source code must retain the above copyright
8.\"    notice, this list of conditions and the following disclaimer.
9.\" 2. Redistributions in binary form must reproduce the above copyright
10.\"    notice, this list of conditions and the following disclaimer in the
11.\"    documentation and/or other materials provided with the distribution.
12.\" 3. Neither the name of the University nor the names of its contributors
13.\"    may be used to endorse or promote products derived from this software
14.\"    without specific prior written permission.
15.\"
16.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
17.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
18.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
19.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
20.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
21.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
22.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
23.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
24.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
25.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
26.\" SUCH DAMAGE.
27.\"
28.\"	@(#)mlock.2	8.2 (Berkeley) 12/11/93
29.\"
.Dd May 13, 2019
.Dt MLOCK 2
.Os
.Sh NAME
.Nm mlock ,
.Nm munlock
.Nd lock (unlock) physical pages in memory
.Sh LIBRARY
.Lb libc
.Sh SYNOPSIS
.In sys/mman.h
.Ft int
.Fn mlock "const void *addr" "size_t len"
.Ft int
.Fn munlock "const void *addr" "size_t len"
.Sh DESCRIPTION
The
.Fn mlock
system call
locks into memory the physical pages associated with the virtual address
range starting at
.Fa addr
for
.Fa len
bytes.
The
.Fn munlock
system call unlocks pages previously locked by one or more
.Fn mlock
calls.
For both, the
.Fa addr
argument should be aligned to a multiple of the page size.
If the
.Fa len
argument is not a multiple of the page size, it will be rounded up
to be so.
The entire range must be allocated.
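.Pp
For example, a program might lock a heap buffer by aligning the start
address down to a page boundary and letting the kernel round the length
up to whole pages.
The fragment below is a minimal sketch of that usage:
.Bd -literal -offset indent
#include <sys/mman.h>

#include <err.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	size_t buflen = 1000;
	char *buf = malloc(buflen);
	uintptr_t pagesize = (uintptr_t)getpagesize();
	uintptr_t start, end;

	if (buf == NULL)
		err(1, "malloc");

	/*
	 * Align the start of the range down to a page boundary; the
	 * length is rounded up to a multiple of the page size by the
	 * kernel.
	 */
	start = (uintptr_t)buf & ~(pagesize - 1);
	end = (uintptr_t)buf + buflen;

	if (mlock((void *)start, end - start) == -1)
		err(1, "mlock");

	/* ... work on buf while its pages remain resident ... */

	if (munlock((void *)start, end - start) == -1)
		err(1, "munlock");
	free(buf);
	return (0);
}
.Ed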
.Pp
After an
.Fn mlock
system call, the indicated pages will cause neither a non-resident-page
fault nor an address-translation fault until they are unlocked.
They may still cause protection-violation faults or TLB-miss faults on
architectures with software-managed TLBs.
The physical pages remain in memory until all locked mappings for the pages
are removed.
Multiple processes may have the same physical pages locked via their own
virtual address mappings.
A single process may likewise have pages multiply-locked via different virtual
mappings of the same physical pages.
Unlocking is performed explicitly by
.Fn munlock
or implicitly by a call to
.Fn munmap ,
which deallocates the unmapped address range.
Locked mappings are not inherited by the child process after a
.Xr fork 2 .
.Pp
Since physical memory is a potentially scarce resource, processes are
limited in how much they can lock down.
The amount of memory that a single process can
.Fn mlock
is limited by both the per-process
.Dv RLIMIT_MEMLOCK
resource limit and the
system-wide
.Dq wired pages
limit
.Va vm.max_user_wired .
.Va vm.max_user_wired
applies to the system as a whole, so the amount available to a single
process at any given time is the difference between
.Va vm.max_user_wired
and
.Va vm.stats.vm.v_user_wire_count .
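.Pp
The per-process bound can be examined with
.Xr getrlimit 2
and the system-wide bound with
.Xr sysctlbyname 3 .
The sketch below reports both, assuming the two sysctl variables are
page counts that fit in a 64-bit integer:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/resource.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	struct rlimit rl;
	uint64_t maxwired = 0, userwired = 0;
	size_t len;

	if (getrlimit(RLIMIT_MEMLOCK, &rl) == -1)
		err(1, "getrlimit");

	len = sizeof(maxwired);
	if (sysctlbyname("vm.max_user_wired", &maxwired, &len,
	    NULL, 0) == -1)
		err(1, "vm.max_user_wired");
	len = sizeof(userwired);
	if (sysctlbyname("vm.stats.vm.v_user_wire_count", &userwired,
	    &len, NULL, 0) == -1)
		err(1, "v_user_wire_count");

	/* RLIMIT_MEMLOCK is in bytes; the sysctl values are in pages. */
	printf("per-process limit: %ju bytes\en",
	    (uintmax_t)rl.rlim_cur);
	printf("system-wide headroom: %ju pages\en",
	    (uintmax_t)(maxwired - userwired));
	return (0);
}
.Ed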
.Pp
If
.Va security.bsd.unprivileged_mlock
is set to 0, these calls are only available to the super-user.
.Sh RETURN VALUES
.Rv -std
.Pp
If the call succeeds, all pages in the range become locked (unlocked);
otherwise the locked status of all pages in the range remains unchanged.
.Sh ERRORS
The
.Fn mlock
system call
will fail if:
.Bl -tag -width Er
.It Bq Er EPERM
.Va security.bsd.unprivileged_mlock
is set to 0 and the caller is not the super-user.
.It Bq Er EINVAL
The address range given wraps around zero.
.It Bq Er ENOMEM
Some portion of the indicated address range is not allocated,
there was an error faulting in or mapping a page,
or locking the indicated range would exceed the per-process or system-wide
limits for locked memory.
.El
.Pp
The
.Fn munlock
system call
will fail if:
.Bl -tag -width Er
.It Bq Er EPERM
.Va security.bsd.unprivileged_mlock
is set to 0 and the caller is not the super-user.
.It Bq Er EINVAL
The address range given wraps around zero.
.It Bq Er ENOMEM
Some or all of the address range specified by the
.Fa addr
and
.Fa len
arguments does not correspond to valid mapped pages in the address space
of the process.
.It Bq Er ENOMEM
Locking the pages mapped by the specified range would exceed a limit on
the amount of memory that the process may lock.
.El
.Sh SEE ALSO
.Xr fork 2 ,
.Xr mincore 2 ,
.Xr minherit 2 ,
.Xr mlockall 2 ,
.Xr mmap 2 ,
.Xr munlockall 2 ,
.Xr munmap 2 ,
.Xr setrlimit 2 ,
.Xr getpagesize 3
.Sh HISTORY
The
.Fn mlock
and
.Fn munlock
system calls first appeared in
.Bx 4.4 .
.Sh BUGS
Allocating too much wired memory can lead to a memory-allocation deadlock
from which the only recovery is a reboot.
.Pp
The per-process and system-wide resource limits of locked memory apply
to the amount of virtual memory locked, not the amount of locked physical
pages.
Hence two distinct locked mappings of the same physical page count as
two pages against the system limit, and also against the per-process limit
if both mappings belong to the same physical map.
.Pp
The per-process resource limit is not currently supported.