j_e_f_f_g wrote:
    male wrote:
        Wrong. This is the least likely cause you could imagine for a segfault.
    The issue with the OOM (ie, Out Of Memory) Manager is as usual another example of you arguing with your own straw man.
You seem to be confusing me with male - I was the one who brought up memory overcommit and OOM.
I'm not sure which 'straw man' you think I'm arguing against. You claimed the code was buggy because it didn't check for malloc() returning NULL, and I put that into perspective by pointing out that 1) the only situation where such a check would do any good is an OOM situation, and 2) it wouldn't do much good in an OOM situation on a typical system.
j_e_f_f_g wrote:It's a total fallacy programmers have that malloc() won't return 0 due to the OOM.
Uh, no. (I'll ignore the 0-versus-NULL debate to try to keep this on-topic.)
On Linux, due to memory overcommit, malloc() might not return NULL even if there's insufficient memory to back the allocation. A simple example program can demonstrate this: the program below tries to allocate 12 gigs of memory. On my configuration, all of these malloc() calls return a non-NULL value. Obviously, since my machine doesn't have 12 gigs of memory (and I don't use swap), this can't work - and indeed, if I try to actually use the memory (in this case, by writing some 'y' characters into it), the process grows too big and gets killed by the OOM killer.
Code:
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <assert.h>

int main(void)
{
    unsigned long gig_in_bytes = 1024 * 1024 * 1024;

    // This example assumes an architecture where the smallest addressable unit
    // is a byte, and the maximum value of size_t is at least a gig (i.e. any
    // modern system with a 32-bit or wider architecture)
    assert(sizeof(unsigned char) == 1);
    assert(gig_in_bytes < SIZE_MAX);

    int gigs_to_alloc = 12;
    unsigned char *allocated_chunks[gigs_to_alloc];
    int i;
    unsigned long j;

    // First allocate a generous amount of memory. Depending on overcommit
    // settings, this might not return NULL even if it allocates more than
    // the physically available amount of memory.
    for (i = 0; i < gigs_to_alloc; i++) {
        allocated_chunks[i] = (unsigned char *) malloc(gig_in_bytes);
        assert(allocated_chunks[i] != NULL);
        printf("Malloc'ed %d gigs in total now\n", i + 1);
    }

    // Now actually use the memory (by writing into it)
    for (i = 0; i < gigs_to_alloc; i++) {
        for (j = 0; j < gig_in_bytes; j += 10000)
            allocated_chunks[i][j] = 'y';
    }

    printf("Done\n");
    return 0;
}
So, this example shows that even allocating huge 1-gig chunks of memory on a machine that doesn't have that much available doesn't always make malloc() return NULL - which j_e_f_f_g above claimed was a 'total fallacy'. Therefore I stand by my earlier claim: checking the return value of malloc() for a small number of small allocations is unlikely to improve the stability of your application when running on a typical Linux system.
Of course this doesn't necessarily mean checking the return value of malloc() is useless. It's not that hard to think of scenarios where it could be a good idea. Nonetheless, I hope this does put j_e_f_f_g's bold claims above into some perspective.