[FREELDR]: On ARM, don't turn on maximum, hyper, ultra-slow debugging and analysis...
reactos/boot/freeldr/freeldr/rtl/bget.c
1 /*
2
3 B G E T
4
5 Buffer allocator
6
7 Designed and implemented in April of 1972 by John Walker, based on the
8 Case Algol OPRO$ algorithm implemented in 1966.
9
10 Reimplemented in 1975 by John Walker for the Interdata 70.
11 Reimplemented in 1977 by John Walker for the Marinchip 9900.
12 Reimplemented in 1982 by Duff Kurland for the Intel 8080.
13
14 Portable C version implemented in September of 1990 by an older, wiser
15 instance of the original implementor.
16
17 Souped up and/or weighed down slightly shortly thereafter by Greg
18 Lutz.
19
20 AMIX edition, including the new compaction call-back option, prepared
21 by John Walker in July of 1992.
22
23 Bug in built-in test program fixed, ANSI compiler warnings eradicated,
24 buffer pool validator implemented, and guaranteed repeatable test
25 added by John Walker in October of 1995.
26
27 This program is in the public domain.
28
29 1. This is the book of the generations of Adam. In the day that God
30 created man, in the likeness of God made he him;
31 2. Male and female created he them; and blessed them, and called
32 their name Adam, in the day when they were created.
33 3. And Adam lived an hundred and thirty years, and begat a son in
34 his own likeness, and after his image; and called his name Seth:
35 4. And the days of Adam after he had begotten Seth were eight
36 hundred years: and he begat sons and daughters:
37 5. And all the days that Adam lived were nine hundred and thirty
38 years: and he died.
39 6. And Seth lived an hundred and five years, and begat Enos:
40 7. And Seth lived after he begat Enos eight hundred and seven years,
41 and begat sons and daughters:
42 8. And all the days of Seth were nine hundred and twelve years: and
43 he died.
44 9. And Enos lived ninety years, and begat Cainan:
45 10. And Enos lived after he begat Cainan eight hundred and fifteen
46 years, and begat sons and daughters:
47 11. And all the days of Enos were nine hundred and five years: and
48 he died.
49 12. And Cainan lived seventy years and begat Mahalaleel:
50 13. And Cainan lived after he begat Mahalaleel eight hundred and
51 forty years, and begat sons and daughters:
52 14. And all the days of Cainan were nine hundred and ten years: and
53 he died.
54 15. And Mahalaleel lived sixty and five years, and begat Jared:
55 16. And Mahalaleel lived after he begat Jared eight hundred and
56 thirty years, and begat sons and daughters:
57 17. And all the days of Mahalaleel were eight hundred ninety and
58 five years: and he died.
59 18. And Jared lived an hundred sixty and two years, and he begat
60 Enoch:
61 19. And Jared lived after he begat Enoch eight hundred years, and
62 begat sons and daughters:
63 20. And all the days of Jared were nine hundred sixty and two years:
64 and he died.
65 21. And Enoch lived sixty and five years, and begat Methuselah:
66 22. And Enoch walked with God after he begat Methuselah three
67 hundred years, and begat sons and daughters:
68 23. And all the days of Enoch were three hundred sixty and five
69 years:
70 24. And Enoch walked with God: and he was not; for God took him.
71 25. And Methuselah lived an hundred eighty and seven years, and
72 begat Lamech.
73 26. And Methuselah lived after he begat Lamech seven hundred eighty
74 and two years, and begat sons and daughters:
75 27. And all the days of Methuselah were nine hundred sixty and nine
76 years: and he died.
77 28. And Lamech lived an hundred eighty and two years, and begat a
78 son:
79 29. And he called his name Noah, saying, This same shall comfort us
80 concerning our work and toil of our hands, because of the ground
81 which the LORD hath cursed.
82 30. And Lamech lived after he begat Noah five hundred ninety and
83 five years, and begat sons and daughters:
84 31. And all the days of Lamech were seven hundred seventy and seven
85 years: and he died.
86 32. And Noah was five hundred years old: and Noah begat Shem, Ham,
87 and Japheth.
88
89 And buffers begat buffers, and links begat links, and buffer pools
90 begat links to chains of buffer pools containing buffers, and lo the
91 buffers and links and pools of buffers and pools of links to chains of
92 pools of buffers were fruitful and they multiplied and the Operating
93 System looked down upon them and said that it was Good.
94
95
96 INTRODUCTION
97 ============
98
99 BGET is a comprehensive memory allocation package which is easily
100 configured to the needs of an application. BGET is efficient in
101 both the time needed to allocate and release buffers and in the
102 memory overhead required for buffer pool management. It
103 automatically consolidates contiguous space to minimise
104     fragmentation.  BGET is configured by compile-time definitions.
105 Major options include:
106
107 * A built-in test program to exercise BGET and
108 demonstrate how the various functions are used.
109
110 * Allocation by either the "first fit" or "best fit"
111 method.
112
113 * Wiping buffers at release time to catch code which
114 references previously released storage.
115
116 * Built-in routines to dump individual buffers or the
117 entire buffer pool.
118
119 * Retrieval of allocation and pool size statistics.
120
121 * Quantisation of buffer sizes to a power of two to
122 satisfy hardware alignment constraints.
123
124 * Automatic pool compaction, growth, and shrinkage by
125 means of call-backs to user defined functions.
126
127 Applications of BGET can range from storage management in
128 ROM-based embedded programs to providing the framework upon which
129 a multitasking system incorporating garbage collection is
130 constructed. BGET incorporates extensive internal consistency
131 checking using the <assert.h> mechanism; all these checks can be
132 turned off by compiling with NDEBUG defined, yielding a version of
133 BGET with minimal size and maximum speed.
134
135 The basic algorithm underlying BGET has withstood the test of
136 time; more than 25 years have passed since the first
137 implementation of this code. And yet, it is substantially more
138 efficient than the native allocation schemes of many operating
139 systems: the Macintosh and Microsoft Windows to name two, on which
140 programs have obtained substantial speed-ups by layering BGET as
141 an application level memory manager atop the underlying system's.
142
143 BGET has been implemented on the largest mainframes and the lowest
144 of microprocessors. It has served as the core for multitasking
145 operating systems, multi-thread applications, embedded software in
146 data network switching processors, and a host of C programs. And
147 while it has accreted flexibility and additional options over the
148 years, it remains fast, memory efficient, portable, and easy to
149 integrate into your program.
150
151
152 BGET IMPLEMENTATION ASSUMPTIONS
153 ===============================
154
155 BGET is written in as portable a dialect of C as possible. The
156 only fundamental assumption about the underlying hardware
157     architecture is that memory is allocated from a linear array which
158 can be addressed as a vector of C "char" objects. On segmented
159 address space architectures, this generally means that BGET should
160 be used to allocate storage within a single segment (although some
161 compilers simulate linear address spaces on segmented
162 architectures). On segmented architectures, then, BGET buffer
163 pools may not be larger than a segment, but since BGET allows any
164 number of separate buffer pools, there is no limit on the total
165 storage which can be managed, only on the largest individual
166 object which can be allocated. Machines with a linear address
167 architecture, such as the VAX, 680x0, Sparc, MIPS, or the Intel
168 80386 and above in native mode, may use BGET without restriction.
169
170
171 GETTING STARTED WITH BGET
172 =========================
173
174 Although BGET can be configured in a multitude of fashions, there
175 are three basic ways of working with BGET. The functions
176 mentioned below are documented in the following section. Please
177 excuse the forward references which are made in the interest of
178 providing a roadmap to guide you to the BGET functions you're
179 likely to need.
180
181 Embedded Applications
182 ---------------------
183
184 Embedded applications typically have a fixed area of memory
185 dedicated to buffer allocation (often in a separate RAM address
186 space distinct from the ROM that contains the executable code).
187 To use BGET in such an environment, simply call bpool() with the
188 start address and length of the buffer pool area in RAM, then
189 allocate buffers with bget() and release them with brel().
190 Embedded applications with very limited RAM but abundant CPU speed
191 may benefit by configuring BGET for BestFit allocation (which is
192 usually not worth it in other environments).
193
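    As a minimal sketch (the array name, its 16 KB size, and the wrapper
    function names below are illustrative assumptions, not part of BGET),
    such an arrangement might look like this:

        #include "bget.h"

        static char heap_area[16384];      // RAM region dedicated to the pool

        void heap_init(void)
        {
            bpool((void *) heap_area, (bufsize) sizeof heap_area);
        }

        void heap_demo(void)
        {
            char *p = (char *) bget((bufsize) 128);   // request a 128-byte buffer

            if (p != NULL) {
                // ... use the buffer ...
                brel((void *) p);                     // return it to the pool
            }
        }
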
194 Malloc() Emulation
195 ------------------
196
197 If the C library malloc() function is too slow, not present in
198     your development environment (for example, in a native Windows or
199 Macintosh program), or otherwise unsuitable, you can replace it
200 with BGET. Initially define a buffer pool of an appropriate size
201 with bpool()--usually obtained by making a call to the operating
202 system's low-level memory allocator. Then allocate buffers with
203 bget(), bgetz(), and bgetr() (the last two permit the allocation
204 of buffers initialised to zero and [inefficient] re-allocation of
205 existing buffers for compatibility with C library functions).
206 Release buffers by calling brel(). If a buffer allocation request
207 fails, obtain more storage from the underlying operating system,
208 add it to the buffer pool by another call to bpool(), and continue
209 execution.
210
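    A minimal sketch of such a wrapper follows; the POOL_CHUNK size, the
    xmalloc/xfree names, and the use of the C library malloc() as the
    underlying source of storage are all illustrative assumptions:

        #include <stdlib.h>
        #include "bget.h"

        #define POOL_CHUNK ((bufsize) 65536)   // size of each pool extension

        void *xmalloc(bufsize size)
        {
            void *p = bget(size);

            if (p == NULL) {
                // Pool exhausted: obtain more storage from the system,
                // sized generously enough to hold the request plus BGET's
                // buffer and pool overhead (64 bytes is ample).
                bufsize chunk = (size + 64 > POOL_CHUNK) ? size + 64 : POOL_CHUNK;
                void *more = malloc((size_t) chunk);

                if (more != NULL) {
                    bpool(more, chunk);
                    p = bget(size);            // retry with the enlarged pool
                }
            }
            return p;
        }

        void xfree(void *p)
        {
            if (p != NULL) {                   // mimic free(NULL) being a no-op
                brel(p);
            }
        }
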
211 Automatic Storage Management
212 ----------------------------
213
214 You can use BGET as your application's native memory manager and
215 implement automatic storage pool expansion, contraction, and
216 optionally application-specific memory compaction by compiling
217 BGET with the BECtl variable defined, then calling bectl() and
218 supplying functions for storage compaction, acquisition, and
219 release, as well as a standard pool expansion increment. All of
220 these functions are optional (although it doesn't make much sense
221 to provide a release function without an acquisition function,
222 does it?). Once the call-back functions have been defined with
223 bectl(), you simply use bget() and brel() to allocate and release
224 storage as before. You can supply an initial buffer pool with
225 bpool() or rely on automatic allocation to acquire the entire
226 pool. When a call on bget() cannot be satisfied, BGET first
227 checks if a compaction function has been supplied. If so, it is
228 called (with the space required to satisfy the allocation request
229 and a sequence number to allow the compaction routine to be called
230 successively without looping). If the compaction function is able
231 to free any storage (it needn't know whether the storage it freed
232 was adequate) it should return a nonzero value, whereupon BGET
233 will retry the allocation request and, if it fails again, call the
234 compaction function again with the next-higher sequence number.
235
236 If the compaction function returns zero, indicating failure to
237 free space, or no compaction function is defined, BGET next tests
238 whether a non-NULL allocation function was supplied to bectl().
239 If so, that function is called with an argument indicating how
240 many bytes of additional space are required. This will be the
241 standard pool expansion increment supplied in the call to bectl()
242 unless the original bget() call requested a buffer larger than
243 this; buffers larger than the standard pool block can be managed
244 "off the books" by BGET in this mode. If the allocation function
245 succeeds in obtaining the storage, it returns a pointer to the new
246 block and BGET expands the buffer pool; if it fails, the
247 allocation request fails and returns NULL to the caller. If a
248 non-NULL release function is supplied, expansion blocks which
249 become totally empty are released to the global free pool by
250 passing their addresses to the release function.
251
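    As a sketch only (BGET must be compiled with BECtl defined; the function
    names and the 32 KB increment are illustrative, and malloc()/free() stand
    in for whatever low-level allocator the host provides), a configuration
    using acquisition and release callbacks but no compaction might read:

        #include <stdlib.h>
        #include "bget.h"

        #define EXPANSION ((bufsize) 32768)    // standard pool expansion increment

        static void *pool_acquire(bufsize size)
        {
            return malloc((size_t) size);      // obtain an expansion block
        }

        static void pool_release(void *buf)
        {
            free(buf);                         // return an empty expansion block
        }

        void memory_init(void)
        {
            // No compaction callback; rely on expansion and contraction alone.
            bectl(NULL, pool_acquire, pool_release, EXPANSION);
        }
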
252 Equipped with appropriate allocation, release, and compaction
253 functions, BGET can be used as part of very sophisticated memory
254 management strategies, including garbage collection. (Note,
255 however, that BGET is *not* a garbage collector by itself, and
256 that developing such a system requires much additional logic and
257 careful design of the application's memory allocation strategy.)
258
259
260 BGET FUNCTION DESCRIPTIONS
261 ==========================
262
263 Functions implemented in this file (some are enabled by certain of
264 the optional settings below):
265
266 void bpool(void *buffer, bufsize len);
267
268 Create a buffer pool of <len> bytes, using the storage starting at
269 <buffer>. You can call bpool() subsequently to contribute
270 additional storage to the overall buffer pool.
271
272 void *bget(bufsize size);
273
274 Allocate a buffer of <size> bytes. The address of the buffer is
275 returned, or NULL if insufficient memory was available to allocate
276 the buffer.
277
278 void *bgetz(bufsize size);
279
280 Allocate a buffer of <size> bytes and clear it to all zeroes. The
281 address of the buffer is returned, or NULL if insufficient memory
282 was available to allocate the buffer.
283
284 void *bgetr(void *buffer, bufsize newsize);
285
286 Reallocate a buffer previously allocated by bget(), changing its
287 size to <newsize> and preserving all existing data. NULL is
288 returned if insufficient memory is available to reallocate the
289 buffer, in which case the original buffer remains intact.
290
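    Because a failed reallocation leaves the original buffer intact, the
    usual precaution, sketched here as a code fragment, is to keep the old
    pointer until the call is known to have succeeded:

        void *tmp = bgetr(buf, (bufsize) newsize);

        if (tmp != NULL) {
            buf = tmp;                 // adopt the resized buffer
        }
        // On failure, buf still addresses the old, intact buffer and
        // must eventually be released with brel().
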
291 void brel(void *buf);
292
293 Return the buffer <buf>, previously allocated by bget(), to the
294 free space pool.
295
296 void bectl(int (*compact)(bufsize sizereq, int sequence),
297 void *(*acquire)(bufsize size),
298 void (*release)(void *buf),
299 bufsize pool_incr);
300
301 Expansion control: specify functions through which the package may
302 compact storage (or take other appropriate action) when an
303 allocation request fails, and optionally automatically acquire
304 storage for expansion blocks when necessary, and release such
305 blocks when they become empty. If <compact> is non-NULL, whenever
306 a buffer allocation request fails, the <compact> function will be
307 called with arguments specifying the number of bytes (total buffer
308 size, including header overhead) required to satisfy the
309 allocation request, and a sequence number indicating the number of
310 consecutive calls on <compact> attempting to satisfy this
311 allocation request. The sequence number is 1 for the first call
312 on <compact> for a given allocation request, and increments on
313 subsequent calls, permitting the <compact> function to take
314 increasingly dire measures in an attempt to free up storage. If
315 the <compact> function returns a nonzero value, the allocation
316 attempt is re-tried. If <compact> returns 0 (as it must if it
317 isn't able to release any space or add storage to the buffer
318 pool), the allocation request fails, which can trigger automatic
319 pool expansion if the <acquire> argument is non-NULL. At the time
320 the <compact> function is called, the state of the buffer
321 allocator is identical to that at the moment the allocation
322 request was made; consequently, the <compact> function may call
323 brel(), bpool(), bstats(), and/or directly manipulate the buffer
324 pool in any manner which would be valid were the application in
325 control. This does not, however, relieve the <compact> function
326 of the need to ensure that whatever actions it takes do not change
327 things underneath the application that made the allocation
328 request. For example, a <compact> function that released a buffer
329 in the process of being reallocated with bgetr() would lead to
330 disaster. Implementing a safe and effective <compact> mechanism
331 requires careful design of an application's memory architecture,
332 and cannot generally be easily retrofitted into existing code.
333
334 If <acquire> is non-NULL, that function will be called whenever an
335 allocation request fails. If the <acquire> function succeeds in
336 allocating the requested space and returns a pointer to the new
337 area, allocation will proceed using the expanded buffer pool. If
338 <acquire> cannot obtain the requested space, it should return NULL
339 and the entire allocation process will fail. <pool_incr>
340 specifies the normal expansion block size. Providing an <acquire>
341 function will cause subsequent bget() requests for buffers too
342 large to be managed in the linked-block scheme (in other words,
343 larger than <pool_incr> minus the buffer overhead) to be satisfied
344 directly by calls to the <acquire> function. Automatic release of
345 empty pool blocks will occur only if all pool blocks in the system
346 are the size given by <pool_incr>.
347
348 void bstats(bufsize *curalloc, bufsize *totfree,
349 bufsize *maxfree, long *nget, long *nrel);
350
351 The amount of space currently allocated is stored into the
352 variable pointed to by <curalloc>. The total free space (sum of
353 all free blocks in the pool) is stored into the variable pointed
354 to by <totfree>, and the size of the largest single block in the
355 free space pool is stored into the variable pointed to by
356 <maxfree>. The variables pointed to by <nget> and <nrel> are
357 filled, respectively, with the number of successful (non-NULL
358 return) bget() calls and the number of brel() calls.
359
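    For example (a sketch only; the function name is illustrative), a simple
    usage report could be produced as follows, and the bstatse() figures
    described next could be appended in the same way:

        #include <stdio.h>
        #include "bget.h"

        void report_pool_usage(void)
        {
            bufsize curalloc, totfree, maxfree;
            long nget, nrel;

            bstats(&curalloc, &totfree, &maxfree, &nget, &nrel);
            printf("%ld bytes allocated, %ld free (largest block %ld); "
                   "%ld gets, %ld releases\n",
                   (long) curalloc, (long) totfree, (long) maxfree,
                   nget, nrel);
        }
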
360 void bstatse(bufsize *pool_incr, long *npool,
361 long *npget, long *nprel,
362 long *ndget, long *ndrel);
363
364 Extended statistics: The expansion block size will be stored into
365 the variable pointed to by <pool_incr>, or the negative thereof if
366 automatic expansion block releases are disabled. The number of
367 currently active pool blocks will be stored into the variable
368 pointed to by <npool>. The variables pointed to by <npget> and
369 <nprel> will be filled with, respectively, the number of expansion
370 block acquisitions and releases which have occurred. The
371 variables pointed to by <ndget> and <ndrel> will be filled with
372 the number of bget() and brel() calls, respectively, managed
373 through blocks directly allocated by the acquisition and release
374 functions.
375
376 void bufdump(void *buf);
377
378 The buffer pointed to by <buf> is dumped on standard output.
379
380 void bpoold(void *pool, int dumpalloc, int dumpfree);
381
382 All buffers in the buffer pool <pool>, previously initialised by a
383 call on bpool(), are listed in ascending memory address order. If
384 <dumpalloc> is nonzero, the contents of allocated buffers are
385 dumped; if <dumpfree> is nonzero, the contents of free blocks are
386 dumped.
387
388 int bpoolv(void *pool);
389
390 The named buffer pool, previously initialised by a call on
391 bpool(), is validated for bad pointers, overwritten data, etc. If
392 compiled with NDEBUG not defined, any error generates an assertion
393 failure. Otherwise 1 is returned if the pool is valid, 0 if an
394 error is found.
395
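    One way to use it, sketched below (requires BufValid, plus BufDump for
    the dump; the function name is illustrative), is to check a pool after a
    suspect operation and dump it if corruption is found:

        void check_pool(void *pool)
        {
            if (!bpoolv(pool)) {
                // Reachable only when compiled with NDEBUG, since otherwise
                // bpoolv() raises an assertion failure on the first error.
                bpoold(pool, 1, 1);    // dump allocated and free buffers
            }
        }
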
396
397 BGET CONFIGURATION
398 ==================
399 */
400
401 /*#define TestProg 20000*/ /* Generate built-in test program
402 if defined. The value specifies
403 how many buffer allocation attempts
404 the test program should make. */
405
406 #define SizeQuant 4 /* Buffer allocation size quantum:
407 all buffers allocated are a
408 multiple of this size. This
409 MUST be a power of two. */
410 #ifndef _M_ARM
411
412 #define BufDump 1 /* Define this symbol to enable the
413 bpoold() function which dumps the
414 buffers in a buffer pool. */
415
416 #define BufValid 1 /* Define this symbol to enable the
417 bpoolv() function for validating
418 a buffer pool. */
419
420 #define DumpData 1 /* Define this symbol to enable the
421 bufdump() function which allows
422 dumping the contents of an allocated
423 or free buffer. */
424
425 #define BufStats 1 /* Define this symbol to enable the
426 bstats() function which calculates
427 the total free space in the buffer
428 pool, the largest available
429 buffer, and the total space
430 currently allocated. */
431
432 #define FreeWipe 1 /* Wipe free buffers to a guaranteed
433 pattern of garbage to trip up
434 miscreants who attempt to use
435 pointers into released buffers. */
436
437 #define BestFit 1 /* Use a best fit algorithm when
438 searching for space for an
439 allocation request. This uses
440 memory more efficiently, but
441 allocation will be much slower. */
442
443 #define BECtl 1 /* Define this symbol to enable the
444 bectl() function for automatic
445 pool space control. */
447 #endif
448
449 #include <stdio.h>
450
451 int TuiPrintf(const char *format, ... );
452 #define printf TuiPrintf
453
454 #ifdef lint
455 #define NDEBUG /* Exits in asserts confuse lint */
456 /* LINTLIBRARY */ /* Don't complain about def, no ref */
457 extern char *sprintf(); /* Sun includes don't define sprintf */
458 #endif
459
460 #define NDEBUG
461
462 #include <assert.h>
463 #include <memory.h>
464
465 #ifdef BufDump /* BufDump implies DumpData */
466 #ifndef DumpData
467 #define DumpData 1
468 #endif
469 #endif
470
471 #ifdef DumpData
472 #include <ctype.h>
473 #endif
474
475 /* Declare the interface, including the requested buffer size type,
476 bufsize. */
477
478 #include "bget.h"
479
480 #define MemSize int /* Type for size arguments to memxxx()
481 functions such as memcmp(). */
482
483 /* Queue links */
484
485 struct qlinks {
486 struct bfhead *flink; /* Forward link */
487 struct bfhead *blink; /* Backward link */
488 };
489
490 /* Header in allocated and free buffers */
491
492 struct bhead {
493 bufsize prevfree; /* Relative link back to previous
494 free buffer in memory or 0 if
495 previous buffer is allocated. */
496 bufsize bsize; /* Buffer size: positive if free,
497 negative if allocated. */
498 };
499 #define BH(p) ((struct bhead *) (p))
500
501 /* Header in directly allocated buffers (by acqfcn) */
502
503 struct bdhead {
504 bufsize tsize; /* Total size, including overhead */
505 struct bhead bh; /* Common header */
506 };
507 #define BDH(p) ((struct bdhead *) (p))
508
509 /* Header in free buffers */
510
511 struct bfhead {
512 struct bhead bh; /* Common allocated/free header */
513 struct qlinks ql; /* Links on free list */
514 };
515 #define BFH(p) ((struct bfhead *) (p))
516
517 static struct bfhead freelist = { /* List of free buffers */
518 {0, 0},
519 {&freelist, &freelist}
520 };
521
522
523 #ifdef BufStats
524 static bufsize totalloc = 0; /* Total space currently allocated */
525 static long numget = 0, numrel = 0; /* Number of bget() and brel() calls */
526 #ifdef BECtl
527 static long numpblk = 0; /* Number of pool blocks */
528 static long numpget = 0, numprel = 0; /* Number of block gets and rels */
529 static long numdget = 0, numdrel = 0; /* Number of direct gets and rels */
530 #endif /* BECtl */
531 #endif /* BufStats */
532
533 #ifdef BECtl
534
535 /* Automatic expansion block management functions */
536
537 static int (*compfcn) _((bufsize sizereq, int sequence)) = NULL;
538 static void *(*acqfcn) _((bufsize size)) = NULL;
539 static void (*relfcn) _((void *buf)) = NULL;
540
541 static bufsize exp_incr = 0; /* Expansion block size */
542 static bufsize pool_len = 0; /* 0: no bpool calls have been made
543 -1: not all pool blocks are
544 the same size
545 >0: (common) block size for all
546 bpool calls made so far
547 */
548 #endif
549
550 /* Minimum allocation quantum: */
551
552 #define QLSize (sizeof(struct qlinks))
553 #define SizeQ ((SizeQuant > QLSize) ? SizeQuant : QLSize)
554
555 #define V (void) /* To denote unwanted returned values */
556
557 /* End sentinel: value placed in bsize field of dummy block delimiting
558 end of pool block. The most negative number which will fit in a
559 bufsize, defined in a way that the compiler will accept. */
560
561 #define ESent ((bufsize) (-(((1L << (sizeof(bufsize) * 8 - 2)) - 1) * 2) - 2))
562
563 /* BGET -- Allocate a buffer. */
564
565 void *bget(requested_size)
566 bufsize requested_size;
567 {
568 bufsize size = requested_size;
569 struct bfhead *b;
570 #ifdef BestFit
571 struct bfhead *best;
572 #endif
573 void *buf;
574 #ifdef BECtl
575 int compactseq = 0;
576 #endif
577
578 assert(size > 0);
579
580 if (size < SizeQ) { /* Need at least room for the */
581 size = SizeQ; /* queue links. */
582 }
583 #ifdef SizeQuant
584 #if SizeQuant > 1
585 size = (size + (SizeQuant - 1)) & (~(SizeQuant - 1));
586 #endif
587 #endif
588
589 size += sizeof(struct bhead); /* Add overhead in allocated buffer
590 to size required. */
591
592 #ifdef BECtl
593 /* If a compact function was provided in the call to bectl(), wrap
594 a loop around the allocation process to allow compaction to
595 intervene in case we don't find a suitable buffer in the chain. */
596
597 while (1) {
598 #endif
599 b = freelist.ql.flink;
600 #ifdef BestFit
601 best = &freelist;
602 #endif
603
604
605 /* Scan the free list searching for the first buffer big enough
606 to hold the requested size buffer. */
607
608 #ifdef BestFit
609 while (b != &freelist) {
610 if (b->bh.bsize >= size) {
611 if ((best == &freelist) || (b->bh.bsize < best->bh.bsize)) {
612 best = b;
613 }
614 }
615 b = b->ql.flink; /* Link to next buffer */
616 }
617 b = best;
618 #endif /* BestFit */
619
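    /* When BestFit is enabled, b now refers to the smallest adequate
       buffer found (or to the free list head if none was found), so the
       scan below either allocates from it immediately or falls straight
       through to the failure handling. */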
620 while (b != &freelist) {
621 if ((bufsize) b->bh.bsize >= size) {
622
623 /* Buffer is big enough to satisfy the request. Allocate it
624 to the caller. We must decide whether the buffer is large
625 enough to split into the part given to the caller and a
626 free buffer that remains on the free list, or whether the
627 entire buffer should be removed from the free list and
628 given to the caller in its entirety. We only split the
629 buffer if enough room remains for a header plus the minimum
630 quantum of allocation. */
631
632 if ((b->bh.bsize - size) > (SizeQ + (sizeof(struct bhead)))) {
633 struct bhead *ba, *bn;
634
635 ba = BH(((char *) b) + (b->bh.bsize - size));
636 bn = BH(((char *) ba) + size);
637 assert(bn->prevfree == b->bh.bsize);
638 /* Subtract size from length of free block. */
639 b->bh.bsize -= size;
640 /* Link allocated buffer to the previous free buffer. */
641 ba->prevfree = b->bh.bsize;
642 /* Plug negative size into user buffer. */
643 ba->bsize = -(bufsize) size;
644 /* Mark buffer after this one not preceded by free block. */
645 bn->prevfree = 0;
646
647 #ifdef BufStats
648 totalloc += size;
649 numget++; /* Increment number of bget() calls */
650 #endif
651 buf = (void *) ((((char *) ba) + sizeof(struct bhead)));
652 return buf;
653 } else {
654 struct bhead *ba;
655
656 ba = BH(((char *) b) + b->bh.bsize);
657 assert(ba->prevfree == b->bh.bsize);
658
659 /* The buffer isn't big enough to split. Give the whole
660 shebang to the caller and remove it from the free list. */
661
662 assert(b->ql.blink->ql.flink == b);
663 assert(b->ql.flink->ql.blink == b);
664 b->ql.blink->ql.flink = b->ql.flink;
665 b->ql.flink->ql.blink = b->ql.blink;
666
667 #ifdef BufStats
668 totalloc += b->bh.bsize;
669 numget++; /* Increment number of bget() calls */
670 #endif
671 /* Negate size to mark buffer allocated. */
672 b->bh.bsize = -(b->bh.bsize);
673
674 /* Zero the back pointer in the next buffer in memory
675 to indicate that this buffer is allocated. */
676 ba->prevfree = 0;
677
678 /* Give user buffer starting at queue links. */
679 buf = (void *) &(b->ql);
680 return buf;
681 }
682 }
683 b = b->ql.flink; /* Link to next buffer */
684 }
685 #ifdef BECtl
686
687 /* We failed to find a buffer. If there's a compact function
688 defined, notify it of the size requested. If it returns
689 TRUE, try the allocation again. */
690
691 if ((compfcn == NULL) || (!(*compfcn)(size, ++compactseq))) {
692 break;
693 }
694 }
695
696 /* No buffer available with requested size free. */
697
698 /* Don't give up yet -- look in the reserve supply. */
699
700 if (acqfcn != NULL) {
701 if (size > exp_incr - sizeof(struct bhead)) {
702
703 /* Request is too large to fit in a single expansion
703        block.  Try to satisfy it by a direct buffer acquisition. */
705
706 struct bdhead *bdh;
707
708 size += sizeof(struct bdhead) - sizeof(struct bhead);
709 if ((bdh = BDH((*acqfcn)((bufsize) size))) != NULL) {
710
711 /* Mark the buffer special by setting the size field
712 of its header to zero. */
713 bdh->bh.bsize = 0;
714 bdh->bh.prevfree = 0;
715 bdh->tsize = size;
716 #ifdef BufStats
717 totalloc += size;
718 numget++; /* Increment number of bget() calls */
719 numdget++; /* Direct bget() call count */
720 #endif
721 buf = (void *) (bdh + 1);
722 return buf;
723 }
724
725 } else {
726
727 /* Try to obtain a new expansion block */
728
729 void *newpool;
730
731 if ((newpool = (*acqfcn)((bufsize) exp_incr)) != NULL) {
732 bpool(newpool, exp_incr);
733 buf = bget(requested_size); /* This can't, I say, can't
734 get into a loop. */
735 return buf;
736 }
737 }
738 }
739
740 /* Still no buffer available */
741
742 #endif /* BECtl */
743
744 return NULL;
745 }
746
747 /* BGETZ -- Allocate a buffer and clear its contents to zero. We clear
748 the entire contents of the buffer to zero, not just the
749 region requested by the caller. */
750
751 void *bgetz(size)
752 bufsize size;
753 {
754 char *buf = (char *) bget(size);
755
756 if (buf != NULL) {
757 struct bhead *b;
758 bufsize rsize;
759
760 b = BH(buf - sizeof(struct bhead));
761 rsize = -(b->bsize);
762 if (rsize == 0) {
763 struct bdhead *bd;
764
765 bd = BDH(buf - sizeof(struct bdhead));
766 rsize = bd->tsize - sizeof(struct bdhead);
767 } else {
768 rsize -= sizeof(struct bhead);
769 }
770 assert(rsize >= size);
771 V memset(buf, 0, (MemSize) rsize);
772 }
773 return ((void *) buf);
774 }
775
776 /* BGETR -- Reallocate a buffer. This is a minimal implementation,
777 simply in terms of brel() and bget(). It could be
778 enhanced to allow the buffer to grow into adjacent free
779 blocks and to avoid moving data unnecessarily. */
780
781 void *bgetr(buf, size)
782 void *buf;
783 bufsize size;
784 {
785 void *nbuf;
786 bufsize osize; /* Old size of buffer */
787 struct bhead *b;
788
789 if ((nbuf = bget(size)) == NULL) { /* Acquire new buffer */
790 return NULL;
791 }
792 if (buf == NULL) {
793 return nbuf;
794 }
795 b = BH(((char *) buf) - sizeof(struct bhead));
796 osize = -b->bsize;
797 #ifdef BECtl
798 if (osize == 0) {
799 /* Buffer acquired directly through acqfcn. */
800 struct bdhead *bd;
801
802 bd = BDH(((char *) buf) - sizeof(struct bdhead));
803 osize = bd->tsize - sizeof(struct bdhead);
804 } else
805 #endif
806 osize -= sizeof(struct bhead);
807 assert(osize > 0);
808 V memcpy((char *) nbuf, (char *) buf, /* Copy the data */
809 (MemSize) ((size < osize) ? size : osize));
810 brel(buf);
811 return nbuf;
812 }
813
814 /* BREL -- Release a buffer. */
815
816 void brel(buf)
817 void *buf;
818 {
819 struct bfhead *b, *bn;
820
821 b = BFH(((char *) buf) - sizeof(struct bhead));
822 #ifdef BufStats
823 numrel++; /* Increment number of brel() calls */
824 #endif
825 assert(buf != NULL);
826
827 #ifdef BECtl
828 if (b->bh.bsize == 0) { /* Directly-acquired buffer? */
829 struct bdhead *bdh;
830
831 bdh = BDH(((char *) buf) - sizeof(struct bdhead));
832 assert(b->bh.prevfree == 0);
833 #ifdef BufStats
834 totalloc -= bdh->tsize;
835 assert(totalloc >= 0);
836 numdrel++; /* Number of direct releases */
837 #endif /* BufStats */
838 #ifdef FreeWipe
839 V memset((char *) buf, 0x55,
840 (MemSize) (bdh->tsize - sizeof(struct bdhead)));
841 #endif /* FreeWipe */
842 assert(relfcn != NULL);
843 (*relfcn)((void *) bdh); /* Release it directly. */
844 return;
845 }
846 #endif /* BECtl */
847
848 /* Buffer size must be negative, indicating that the buffer is
849 allocated. */
850
851 if (b->bh.bsize >= 0) {
852 bn = NULL;
853 }
854 assert(b->bh.bsize < 0);
855
856 /* Back pointer in next buffer must be zero, indicating the
857 same thing: */
858
859 assert(BH((char *) b - b->bh.bsize)->prevfree == 0);
860
861 #ifdef BufStats
862 totalloc += b->bh.bsize;
863 assert(totalloc >= 0);
864 #endif
865
866 /* If the back link is nonzero, the previous buffer is free. */
867
868 if (b->bh.prevfree != 0) {
869
870 /* The previous buffer is free. Consolidate this buffer with it
871 by adding the length of this buffer to the previous free
872 buffer. Note that we subtract the size in the buffer being
873 released, since it's negative to indicate that the buffer is
874 allocated. */
875
876 register bufsize size = b->bh.bsize;
877
878 /* Make the previous buffer the one we're working on. */
879 assert(BH((char *) b - b->bh.prevfree)->bsize == b->bh.prevfree);
880 b = BFH(((char *) b) - b->bh.prevfree);
881 b->bh.bsize -= size;
882 } else {
883
884     /* The previous buffer is allocated.  Insert this buffer
885        on the free list as an isolated free block. */
886
887 assert(freelist.ql.blink->ql.flink == &freelist);
888 assert(freelist.ql.flink->ql.blink == &freelist);
889 b->ql.flink = &freelist;
890 b->ql.blink = freelist.ql.blink;
891 freelist.ql.blink = b;
892 b->ql.blink->ql.flink = b;
893 b->bh.bsize = -b->bh.bsize;
894 }
895
896 /* Now we look at the next buffer in memory, located by advancing from
897 the start of this buffer by its size, to see if that buffer is
898 free. If it is, we combine this buffer with the next one in
899 memory, dechaining the second buffer from the free list. */
900
901 bn = BFH(((char *) b) + b->bh.bsize);
902 if (bn->bh.bsize > 0) {
903
904 /* The buffer is free. Remove it from the free list and add
905 its size to that of our buffer. */
906
907 assert(BH((char *) bn + bn->bh.bsize)->prevfree == bn->bh.bsize);
908 assert(bn->ql.blink->ql.flink == bn);
909 assert(bn->ql.flink->ql.blink == bn);
910 bn->ql.blink->ql.flink = bn->ql.flink;
911 bn->ql.flink->ql.blink = bn->ql.blink;
912 b->bh.bsize += bn->bh.bsize;
913
914 /* Finally, advance to the buffer that follows the newly
915 consolidated free block. We must set its backpointer to the
916 head of the consolidated free block. We know the next block
917 must be an allocated block because the process of recombination
918 guarantees that two free blocks will never be contiguous in
919 memory. */
920
921 bn = BFH(((char *) b) + b->bh.bsize);
922 }
923 #ifdef FreeWipe
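    /* Wipe the newly freed (and possibly consolidated) block, apart from
       its header, with the 0x55 pattern so stale references are caught. */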
924 V memset(((char *) b) + sizeof(struct bfhead), 0x55,
925 (MemSize) (b->bh.bsize - sizeof(struct bfhead)));
926 #endif
927 assert(bn->bh.bsize < 0);
928
929 /* The next buffer is allocated. Set the backpointer in it to point
930 to this buffer; the previous free buffer in memory. */
931
932 bn->bh.prevfree = b->bh.bsize;
933
934 #ifdef BECtl
935
936 /* If a block-release function is defined, and this free buffer
937 constitutes the entire block, release it. Note that pool_len
938 is defined in such a way that the test will fail unless all
939 pool blocks are the same size. */
940
941 if (relfcn != NULL &&
942 ((bufsize) b->bh.bsize) == (pool_len - sizeof(struct bhead))) {
943
944 assert(b->bh.prevfree == 0);
945 assert(BH((char *) b + b->bh.bsize)->bsize == ESent);
946 assert(BH((char *) b + b->bh.bsize)->prevfree == b->bh.bsize);
947 /* Unlink the buffer from the free list */
948 b->ql.blink->ql.flink = b->ql.flink;
949 b->ql.flink->ql.blink = b->ql.blink;
950
951 (*relfcn)(b);
952 #ifdef BufStats
953 numprel++; /* Nr of expansion block releases */
954 numpblk--; /* Total number of blocks */
955 assert(numpblk == numpget - numprel);
956 #endif /* BufStats */
957 }
958 #endif /* BECtl */
959 }
960
961 #ifdef BECtl
962
963 /* BECTL -- Establish automatic pool expansion control */
964
965 void bectl(compact, acquire, release, pool_incr)
966 int (*compact) _((bufsize sizereq, int sequence));
967 void *(*acquire) _((bufsize size));
968 void (*release) _((void *buf));
969 bufsize pool_incr;
970 {
971 compfcn = compact;
972 acqfcn = acquire;
973 relfcn = release;
974 exp_incr = pool_incr;
975 }
976 #endif
977
978 /* BPOOL -- Add a region of memory to the buffer pool. */
979
980 void bpool(buf, len)
981 void *buf;
982 bufsize len;
983 {
984 struct bfhead *b = BFH(buf);
985 struct bhead *bn;
986
987 #ifdef SizeQuant
988 len &= ~(SizeQuant - 1);
989 #endif
990 #ifdef BECtl
991 if (pool_len == 0) {
992 pool_len = len;
993 } else if (len != pool_len) {
994 pool_len = -1;
995 }
996 #ifdef BufStats
997 numpget++; /* Number of block acquisitions */
998 numpblk++; /* Number of blocks total */
999 assert(numpblk == numpget - numprel);
1000 #endif /* BufStats */
1001 #endif /* BECtl */
1002
1003 /* Since the block is initially occupied by a single free buffer,
1004 it had better not be (much) larger than the largest buffer
1005 whose size we can store in bhead.bsize. */
1006
1007 assert(len - sizeof(struct bhead) <= -((bufsize) ESent + 1));
1008
1009 /* Clear the backpointer at the start of the block to indicate that
1010 there is no free block prior to this one. That blocks
1011 recombination when the first block in memory is released. */
1012
1013 b->bh.prevfree = 0;
1014
1015 /* Chain the new block to the free list. */
1016
1017 assert(freelist.ql.blink->ql.flink == &freelist);
1018 assert(freelist.ql.flink->ql.blink == &freelist);
1019 b->ql.flink = &freelist;
1020 b->ql.blink = freelist.ql.blink;
1021 freelist.ql.blink = b;
1022 b->ql.blink->ql.flink = b;
1023
1024 /* Create a dummy allocated buffer at the end of the pool. This dummy
1025 buffer is seen when a buffer at the end of the pool is released and
1026 blocks recombination of the last buffer with the dummy buffer at
1027 the end. The length in the dummy buffer is set to the largest
1028 negative number to denote the end of the pool for diagnostic
1029 routines (this specific value is not counted on by the actual
1030 allocation and release functions). */
1031
1032 len -= sizeof(struct bhead);
1033 b->bh.bsize = (bufsize) len;
1034 #ifdef FreeWipe
1035 V memset(((char *) b) + sizeof(struct bfhead), 0x55,
1036 (MemSize) (len - sizeof(struct bfhead)));
1037 #endif
1038 bn = BH(((char *) b) + len);
1039 bn->prevfree = (bufsize) len;
1040 /* Definition of ESent assumes two's complement! */
1041 assert((~0) == -1);
1042 bn->bsize = ESent;
1043 }
1044
1045 #ifdef BufStats
1046
1047 /* BSTATS -- Return buffer allocation free space statistics. */
1048
1049 void bstats(curalloc, totfree, maxfree, nget, nrel)
1050 bufsize *curalloc, *totfree, *maxfree;
1051 long *nget, *nrel;
1052 {
1053 struct bfhead *b = freelist.ql.flink;
1054
1055 *nget = numget;
1056 *nrel = numrel;
1057 *curalloc = totalloc;
1058 *totfree = 0;
1059 *maxfree = -1;
1060 while (b != &freelist) {
1061 assert(b->bh.bsize > 0);
1062 *totfree += b->bh.bsize;
1063 if (b->bh.bsize > *maxfree) {
1064 *maxfree = b->bh.bsize;
1065 }
1066 b = b->ql.flink; /* Link to next buffer */
1067 }
1068 }
1069
1070 #ifdef BECtl
1071
1072 /* BSTATSE -- Return extended statistics */
1073
1074 void bstatse(pool_incr, npool, npget, nprel, ndget, ndrel)
1075 bufsize *pool_incr;
1076 long *npool, *npget, *nprel, *ndget, *ndrel;
1077 {
1078 *pool_incr = (pool_len < 0) ? -exp_incr : exp_incr;
1079 *npool = numpblk;
1080 *npget = numpget;
1081 *nprel = numprel;
1082 *ndget = numdget;
1083 *ndrel = numdrel;
1084 }
1085 #endif /* BECtl */
1086 #endif /* BufStats */
1087
1088 #ifdef DumpData
1089
1090 /* BUFDUMP -- Dump the data in a buffer. This is called with the user
1091 data pointer, and backs up to the buffer header. It will
1092 dump either a free block or an allocated one. */
1093
1094 void bufdump(buf)
1095 void *buf;
1096 {
1097 struct bfhead *b;
1098 unsigned char *bdump;
1099 bufsize bdlen;
1100
1101 b = BFH(((char *) buf) - sizeof(struct bhead));
1102 assert(b->bh.bsize != 0);
1103 if (b->bh.bsize < 0) {
1104 bdump = (unsigned char *) buf;
1105 bdlen = (-b->bh.bsize) - sizeof(struct bhead);
1106 } else {
1107 bdump = (unsigned char *) (((char *) b) + sizeof(struct bfhead));
1108 bdlen = b->bh.bsize - sizeof(struct bfhead);
1109 }
1110
1111 while (bdlen > 0) {
1112 int i, dupes = 0;
1113 bufsize l = bdlen;
1114 char bhex[50], bascii[20];
1115
1116 if (l > 16) {
1117 l = 16;
1118 }
1119
1120 for (i = 0; i < l; i++) {
1121 V sprintf(bhex + i * 3, "%02X ", bdump[i]);
1122 bascii[i] = isprint(bdump[i]) ? bdump[i] : ' ';
1123 }
1124 bascii[i] = 0;
1125 V printf("%-48s %s\n", bhex, bascii);
1126 bdump += l;
1127 bdlen -= l;
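        /* Collapse runs of 16-byte chunks identical to the line just
           printed, counting how many lines are being suppressed. */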
1128 while ((bdlen > 16) && (memcmp((char *) (bdump - 16),
1129 (char *) bdump, 16) == 0)) {
1130 dupes++;
1131 bdump += 16;
1132 bdlen -= 16;
1133 }
1134 if (dupes > 1) {
1135 V printf(
1136 " (%d lines [%d bytes] identical to above line skipped)\n",
1137 dupes, dupes * 16);
1138 } else if (dupes == 1) {
1139 bdump -= 16;
1140 bdlen += 16;
1141 }
1142 }
1143 }
1144 #endif
1145
1146 #ifdef BufDump
1147
1148 /* BPOOLD -- Dump a buffer pool. The buffer headers are always listed.
1149 If DUMPALLOC is nonzero, the contents of allocated buffers
1150 are dumped. If DUMPFREE is nonzero, free blocks are
1151 dumped as well. If FreeWipe checking is enabled, free
1152 blocks which have been clobbered will always be dumped. */
1153
1154 void bpoold(buf, dumpalloc, dumpfree)
1155 void *buf;
1156 int dumpalloc, dumpfree;
1157 {
1158 struct bfhead *b = BFH(buf);
1159
1160 while (b->bh.bsize != ESent) {
1161 bufsize bs = b->bh.bsize;
1162
1163 if (bs < 0) {
1164 bs = -bs;
1165 V printf("Allocated buffer: size %6ld bytes.\n", (long) bs);
1166 if (dumpalloc) {
1167 bufdump((void *) (((char *) b) + sizeof(struct bhead)));
1168 }
1169 } else {
1170 char *lerr = "";
1171
1172 assert(bs > 0);
1173 if ((b->ql.blink->ql.flink != b) ||
1174 (b->ql.flink->ql.blink != b)) {
1175 lerr = " (Bad free list links)";
1176 }
1177 V printf("Free block: size %6ld bytes.%s\n",
1178 (long) bs, lerr);
1179 #ifdef FreeWipe
1180 lerr = ((char *) b) + sizeof(struct bfhead);
1181 if ((bs > sizeof(struct bfhead)) && ((*lerr != 0x55) ||
1182 (memcmp(lerr, lerr + 1,
1183 (MemSize) (bs - (sizeof(struct bfhead) + 1))) != 0))) {
1184 V printf(
1185 "(Contents of above free block have been overstored.)\n");
1186 bufdump((void *) (((char *) b) + sizeof(struct bhead)));
1187 } else
1188 #endif
1189 if (dumpfree) {
1190 bufdump((void *) (((char *) b) + sizeof(struct bhead)));
1191 }
1192 }
1193 b = BFH(((char *) b) + bs);
1194 }
1195 }
1196 #endif /* BufDump */
1197
1198 #ifdef BufValid
1199
1200 /* BPOOLV -- Validate a buffer pool. If NDEBUG isn't defined,
1201 any error generates an assertion failure. */
1202
1203 int bpoolv(buf)
1204 void *buf;
1205 {
1206 struct bfhead *b = BFH(buf);
1207
1208 while (b->bh.bsize != ESent) {
1209 bufsize bs = b->bh.bsize;
1210
1211 if (bs < 0) {
1212 bs = -bs;
1213 } else {
1214 char *lerr = "";
1215
1216 assert(bs > 0);
1217 if (bs <= 0) {
1218 return 0;
1219 }
1220 if ((b->ql.blink->ql.flink != b) ||
1221 (b->ql.flink->ql.blink != b)) {
1222 V printf("Free block: size %6ld bytes. (Bad free list links)\n",
1223 (long) bs);
1224 assert(0);
1225 return 0;
1226 }
1227 #ifdef FreeWipe
1228 lerr = ((char *) b) + sizeof(struct bfhead);
1229 if ((bs > sizeof(struct bfhead)) && ((*lerr != 0x55) ||
1230 (memcmp(lerr, lerr + 1,
1231 (MemSize) (bs - (sizeof(struct bfhead) + 1))) != 0))) {
1232 V printf(
1233 "(Contents of above free block have been overstored.)\n");
1234 bufdump((void *) (((char *) b) + sizeof(struct bhead)));
1235 assert(0);
1236 return 0;
1237 }
1238 #endif
1239 }
1240 b = BFH(((char *) b) + bs);
1241 }
1242 return 1;
1243 }
1244 #endif /* BufValid */
1245
1246 /***********************\
1247 * *
1248 * Built-in test program *
1249 * *
1250 \***********************/
1251
1252 #ifdef TestProg
1253
1254 #define Repeatable 1 /* Repeatable pseudorandom sequence */
1255 /* If Repeatable is not defined, a
1256 time-seeded pseudorandom sequence
1257 is generated, exercising BGET with
1258 a different pattern of calls on each
1259 run. */
1260 #define OUR_RAND /* Use our own built-in version of
1261 rand() to guarantee the test is
1262 100% repeatable. */
1263
1264 #ifdef BECtl
1265 #define PoolSize 300000 /* Test buffer pool size */
1266 #else
1267 #define PoolSize 50000 /* Test buffer pool size */
1268 #endif
1269 #define ExpIncr 32768 /* Test expansion block size */
1270 #define CompactTries 10 /* Maximum tries at compacting */
1271
1272 #define dumpAlloc 0 /* Dump allocated buffers ? */
1273 #define dumpFree 0 /* Dump free buffers ? */
1274
1275 #ifndef Repeatable
1276 extern long time();
1277 #endif
1278
1279 extern char *malloc();
1280 extern int free _((char *));
1281
1282 static char *bchain = NULL; /* Our private buffer chain */
1283 static char *bp = NULL; /* Our initial buffer pool */
1284
1285 #include <math.h>
1286
1287 #ifdef OUR_RAND
1288
1289 static unsigned long int next = 1;
1290
1291 /* Return next random integer */
1292
1293 int rand()
1294 {
1295 next = next * 1103515245L + 12345;
1296 return (unsigned int) (next / 65536L) % 32768L;
1297 }
1298
1299 /* Set seed for random generator */
1300
1301 void srand(seed)
1302 unsigned int seed;
1303 {
1304 next = seed;
1305 }
1306 #endif
1307
1308 /* STATS -- Edit statistics returned by bstats() or bstatse(). */
1309
1310 static void stats(when)
1311 char *when;
1312 {
1313 bufsize cural, totfree, maxfree;
1314 long nget, nfree;
1315 #ifdef BECtl
1316 bufsize pincr;
1317 long totblocks, npget, nprel, ndget, ndrel;
1318 #endif
1319
1320 bstats(&cural, &totfree, &maxfree, &nget, &nfree);
1321 V printf(
1322 "%s: %ld gets, %ld releases. %ld in use, %ld free, largest = %ld\n",
1323 when, nget, nfree, (long) cural, (long) totfree, (long) maxfree);
1324 #ifdef BECtl
1325 bstatse(&pincr, &totblocks, &npget, &nprel, &ndget, &ndrel);
1326 V printf(
1327 " Blocks: size = %ld, %ld (%ld bytes) in use, %ld gets, %ld frees\n",
1328 (long)pincr, totblocks, pincr * totblocks, npget, nprel);
1329 V printf(" %ld direct gets, %ld direct frees\n", ndget, ndrel);
1330 #endif /* BECtl */
1331 }
1332
1333 #ifdef BECtl
1334 static int protect = 0; /* Disable compaction during bgetr() */
1335
1336 /* BCOMPACT -- Compaction call-back function. */
1337
1338 static int bcompact(bsize, seq)
1339 bufsize bsize;
1340 int seq;
1341 {
1342 #ifdef CompactTries
1343 char *bc = bchain;
1344 int i = rand() & 0x3;
1345
1346 #ifdef COMPACTRACE
1347 V printf("Compaction requested. %ld bytes needed, sequence %d.\n",
1348 (long) bsize, seq);
1349 #endif
1350
1351 if (protect || (seq > CompactTries)) {
1352 #ifdef COMPACTRACE
1353 V printf("Compaction gave up.\n");
1354 #endif
1355 return 0;
1356 }
1357
1358 /* Based on a random cast, release a random buffer in the list
1359 of allocated buffers. */
1360
1361 while (i > 0 && bc != NULL) {
1362 bc = *((char **) bc);
1363 i--;
1364 }
1365 if (bc != NULL) {
1366 char *fb;
1367
1368 fb = *((char **) bc);
1369 if (fb != NULL) {
1370 *((char **) bc) = *((char **) fb);
1371 brel((void *) fb);
1372 return 1;
1373 }
1374 }
1375
1376 #ifdef COMPACTRACE
1377 V printf("Compaction bailed out.\n");
1378 #endif
1379 #endif /* CompactTries */
1380 return 0;
1381 }
1382
1383 /* BEXPAND -- Expand pool call-back function. */
1384
1385 static void *bexpand(size)
1386 bufsize size;
1387 {
1388 void *np = NULL;
1389 bufsize cural, totfree, maxfree;
1390 long nget, nfree;
1391
1392 /* Don't expand beyond the total allocated size given by PoolSize. */
1393
1394 bstats(&cural, &totfree, &maxfree, &nget, &nfree);
1395
1396 if (cural < PoolSize) {
1397 np = (void *) malloc((unsigned) size);
1398 }
1399 #ifdef EXPTRACE
1400 V printf("Expand pool by %ld -- %s.\n", (long) size,
1401 np == NULL ? "failed" : "succeeded");
1402 #endif
1403 return np;
1404 }
1405
1406 /* BSHRINK -- Shrink buffer pool call-back function. */
1407
1408 static void bshrink(buf)
1409 void *buf;
1410 {
1411 if (((char *) buf) == bp) {
1412 #ifdef EXPTRACE
1413 V printf("Initial pool released.\n");
1414 #endif
1415 bp = NULL;
1416 }
1417 #ifdef EXPTRACE
1418 V printf("Shrink pool.\n");
1419 #endif
1420 free((char *) buf);
1421 }
1422
1423 #endif /* BECtl */
1424
1425 /* Restrict buffer requests to those large enough to contain our pointer and
1426 small enough for the CPU architecture. */
1427
1428 static bufsize blimit(bs)
1429 bufsize bs;
1430 {
1431 if (bs < sizeof(char *)) {
1432 bs = sizeof(char *);
1433 }
1434
1435 /* This is written out in this ugly fashion because the
1436 cool expression in sizeof(int) that auto-configured
1437 to any length int befuddled some compilers. */
1438
1439 if (sizeof(int) == 2) {
1440 if (bs > 32767) {
1441 bs = 32767;
1442 }
1443 } else {
1444 if (bs > 200000) {
1445 bs = 200000;
1446 }
1447 }
1448 return bs;
1449 }
1450
1451 int main()
1452 {
1453 int i;
1454 double x;
1455
1456 /* Seed the random number generator. If Repeatable is defined, we
1457 always use the same seed. Otherwise, we seed from the clock to
1458 shake things up from run to run. */
1459
1460 #ifdef Repeatable
1461 V srand(1234);
1462 #else
1463 V srand((int) time((long *) NULL));
1464 #endif
1465
1466 /* Compute x such that pow(x, p) ranges between 1 and 4*ExpIncr as
1467 p ranges from 0 to ExpIncr-1, with a concentration in the lower
1468 numbers. */
1469
1470 x = 4.0 * ExpIncr;
1471 x = log(x);
1472 x = exp(log(4.0 * ExpIncr) / (ExpIncr - 1.0));
1473
1474 #ifdef BECtl
1475 bectl(bcompact, bexpand, bshrink, (bufsize) ExpIncr);
1476 bp = malloc(ExpIncr);
1477 assert(bp != NULL);
1478 bpool((void *) bp, (bufsize) ExpIncr);
1479 #else
1480 bp = malloc(PoolSize);
1481 assert(bp != NULL);
1482 bpool((void *) bp, (bufsize) PoolSize);
1483 #endif
1484
1485 stats("Create pool");
1486 V bpoolv((void *) bp);
1487 bpoold((void *) bp, dumpAlloc, dumpFree);
1488
1489 for (i = 0; i < TestProg; i++) {
1490 char *cb;
1491 bufsize bs = pow(x, (double) (rand() & (ExpIncr - 1)));
1492
1493 assert(bs <= (((bufsize) 4) * ExpIncr));
1494 bs = blimit(bs);
1495 if (rand() & 0x400) {
1496 cb = (char *) bgetz(bs);
1497 } else {
1498 cb = (char *) bget(bs);
1499 }
1500 if (cb == NULL) {
1501 #ifdef EasyOut
1502 break;
1503 #else
1504 char *bc = bchain;
1505
1506 if (bc != NULL) {
1507 char *fb;
1508
1509 fb = *((char **) bc);
1510 if (fb != NULL) {
1511 *((char **) bc) = *((char **) fb);
1512 brel((void *) fb);
1513 }
1514 continue;
1515             }
                 /* Nothing in our chain to release: skip this iteration
                    rather than dereference the NULL allocation below. */
                 continue;
1516 #endif
1517 }
1518 *((char **) cb) = (char *) bchain;
1519 bchain = cb;
1520
1521 /* Based on a random cast, release a random buffer in the list
1522 of allocated buffers. */
1523
1524 if ((rand() & 0x10) == 0) {
1525 char *bc = bchain;
1526 int i = rand() & 0x3;
1527
1528 while (i > 0 && bc != NULL) {
1529 bc = *((char **) bc);
1530 i--;
1531 }
1532 if (bc != NULL) {
1533 char *fb;
1534
1535 fb = *((char **) bc);
1536 if (fb != NULL) {
1537 *((char **) bc) = *((char **) fb);
1538 brel((void *) fb);
1539 }
1540 }
1541 }
1542
1543 /* Based on a random cast, reallocate a random buffer in the list
1544 to a random size */
1545
1546 if ((rand() & 0x20) == 0) {
1547 char *bc = bchain;
1548 int i = rand() & 0x3;
1549
1550 while (i > 0 && bc != NULL) {
1551 bc = *((char **) bc);
1552 i--;
1553 }
1554 if (bc != NULL) {
1555 char *fb;
1556
1557 fb = *((char **) bc);
1558 if (fb != NULL) {
1559 char *newb;
1560
1561 bs = pow(x, (double) (rand() & (ExpIncr - 1)));
1562 bs = blimit(bs);
1563 #ifdef BECtl
1564 protect = 1; /* Protect against compaction */
1565 #endif
1566 newb = (char *) bgetr((void *) fb, bs);
1567 #ifdef BECtl
1568 protect = 0;
1569 #endif
1570 if (newb != NULL) {
1571 *((char **) bc) = newb;
1572 }
1573 }
1574 }
1575 }
1576 }
1577 stats("\nAfter allocation");
1578 if (bp != NULL) {
1579 V bpoolv((void *) bp);
1580 bpoold((void *) bp, dumpAlloc, dumpFree);
1581 }
1582
1583 while (bchain != NULL) {
1584 char *buf = bchain;
1585
1586 bchain = *((char **) buf);
1587 brel((void *) buf);
1588 }
1589 stats("\nAfter release");
1590 #ifndef BECtl
1591 if (bp != NULL) {
1592 V bpoolv((void *) bp);
1593 bpoold((void *) bp, dumpAlloc, dumpFree);
1594 }
1595 #endif
1596
1597 return 0;
1598 }
1599 #endif