Archived Forum Post



memory issue when appending data to a CkByteData

Dec 24 '12 at 09:51

In my application, I am adding data to a zip-file by dynamically appending to a CkByteData, which works fine in most cases. However, in some cases this does not work due to memory issues: I receive a memory allocation failure and the size of my CkByteData is set to zero.

In my normal (rather heavy) application, this issue occurs when the size of the CkByteData reaches around 78 MB (on 32-bit; on 64-bit, the issue apparently does not occur). In a small sample application (see below), the upper limit seems to be around 550 MB on my machine.

Is this the expected behavior? How can I influence this? Is there a workaround?

Here is a small sample program showing what I do:

   string pakFileName = "";
   string password = "aa";
   string contentsFileName = "";
   CkByteData buffer;

   size_t increment = 102894;
   size_t counter = 0;
   while (counter < 40000)
   {
      // Build a dummy chunk and append it to the buffer.
      if (void *dummyBytes = malloc(increment * sizeof(unsigned char)))
      {
         buffer.append2(static_cast<const unsigned char *>(dummyBytes), (unsigned long) increment);
         free(dummyBytes);
      }

      // Verify the most recently appended chunk is reachable.
      const unsigned char *data = buffer.getDataAt((unsigned long) (counter * increment));
      if (!data)
         cout << "oops";

      counter++;
   }

   return 0;


There is nothing unusual going on within CkByteData. If you are running out of memory, it's just the simple fact that you ran out of memory.

The CkByteData class has a preAllocate method:

   void CkByteData::preAllocate(unsigned long expectedNumBytes);

This can be called to pre-allocate an internal buffer large enough to hold the anticipated amount of data. Pre-allocating can be a performance boost: it avoids the internal re-allocations that otherwise happen whenever data is appended and the internal buffer is too small to hold the existing plus newly appended data.

For example, if you know your application is likely to use up to 100 MB of data, then pre-allocate 100 MB. I notice that preAllocate does not return true/false. This will be modified for v4.7.1 so that the caller can know whether the pre-allocation succeeded.


Thanks for the quick answer!

I know that using the pre-allocate mechanism might make more sense here, but there is high uncertainty about the expected size of the buffer (it depends on input from the user), so I don't know whether it would pay off in my situation.

Nevertheless, I still find it a bit strange that the sample program runs out of memory after only 550 MB have been allocated, whereas if I allocate memory via new (e.g. new int[]), I can reach up to 2 GB. Any ideas on this?