Luke Hutchison
2018-08-08 05:19:06 UTC
The uncompressed size of a resource is used in
BasicImageReader#getResource(ImageLocation), which calls readBuffer(offset,
uncompressedSize). The readBuffer method then does three things that seem
incorrect if the size is obtained from ZipEntry#getSize():
(1) size < 0 is rejected, but ZipEntry#getSize() returns "the uncompressed
size of the entry data, or -1 if not known":
if (size < 0 || Integer.MAX_VALUE <= size) {
    throw new IndexOutOfBoundsException("Bad size: " + size);
}
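For illustration, here is a small, self-contained sketch (not jimage code; the
class name is made up) showing that -1 is a perfectly normal return value: an
entry written in streaming mode carries no size in its local header, so
ZipEntry#getSize() reports -1 when the entry is first read back.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipSizeDemo {
    public static void main(String[] args) throws Exception {
        // Write an entry without declaring its size up front (streaming mode).
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bytes)) {
            zos.putNextEntry(new ZipEntry("resource.bin"));
            zos.write(new byte[1024]);
            zos.closeEntry();
        }

        // Read it back as a stream: the uncompressed size is not known until
        // the entry data has been consumed, so getSize() returns -1 here.
        try (ZipInputStream zis =
                 new ZipInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            ZipEntry entry = zis.getNextEntry();
            System.out.println("getSize() = " + entry.getSize()); // typically -1
        }
    }
}

A -1 fed straight into readBuffer is rejected by the check above even though
the entry itself is perfectly valid.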
(2) size is used directly, without restriction, rather than as a size hint.
ZipEntry#getSize() may return a totally bogus value (you can write anything
you want for the uncompressed size in a zip file entry). If the requested
size is large, e.g. something close to Integer.MAX_VALUE, a large allocation
will occur every time the resource is read, which can be exploited as a
memory allocation attack vector.
ByteBuffer buffer = slice(memoryMap, (int)offset, (int)size);
// ...
ByteBuffer buffer = ImageBufferCache.getBuffer(size);
Instead, the initial buffer size should be set to something like
Math.min(16 * 1024 * 1024, size), and then grown as needed if the resource
is indeed larger than this initial allocation.
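A rough sketch of that idea, written against a plain InputStream rather than
the ByteBuffer/ImageBufferCache machinery that readBuffer actually uses (the
helper name, the 16 MB cap, and the doubling policy are all illustrative
assumptions, not proposed code):

import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

final class HintedRead {
    // 16 MB cap on the initial allocation; the declared size is only a hint.
    private static final int INITIAL_CAP = 16 * 1024 * 1024;
    // Conservative array-size ceiling (see point (3) below).
    private static final int MAX_BUFFER_SIZE = Integer.MAX_VALUE - 8;

    static byte[] readWithHint(InputStream in, long declaredSize) throws IOException {
        // A negative (unknown) size falls back to a small default; a huge
        // declared size is capped, so it cannot force a huge allocation.
        int initial = (int) Math.max(8 * 1024, Math.min(INITIAL_CAP, declaredSize));
        byte[] buf = new byte[initial];
        int len = 0;
        while (true) {
            if (len == buf.length) {
                // Probe one byte before growing, so a stream that is exactly
                // buf.length bytes long does not trigger a reallocation.
                int probe = in.read();
                if (probe == -1) {
                    break;
                }
                if (buf.length >= MAX_BUFFER_SIZE) {
                    throw new OutOfMemoryError("Required array size too large");
                }
                buf = Arrays.copyOf(buf,
                        (int) Math.min((long) buf.length * 2, MAX_BUFFER_SIZE));
                buf[len++] = (byte) probe;
            }
            int n = in.read(buf, len, buf.length - len);
            if (n == -1) {
                break;
            }
            len += n;
        }
        return Arrays.copyOf(buf, len);
    }
}

With this shape, a bogus declared size of, say, Integer.MAX_VALUE costs at
most 16 MB up front, and memory is only committed in proportion to the data
that actually exists.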
(3) BasicImageReader#readBuffer(long offset, long size) checks whether the
requested size is larger than Integer.MAX_VALUE. However, InputStream defines
the following maximum limit for byte array sizes:
/**
 * The maximum size of array to allocate.
 * Some VMs reserve some header words in an array.
 * Attempts to allocate larger arrays may result in
 * OutOfMemoryError: Requested array size exceeds VM limit
 */
private static final int MAX_BUFFER_SIZE = Integer.MAX_VALUE - 8;
If this limit applies to ByteBuffers (and it does when they are array-backed),
then checking size against Integer.MAX_VALUE is not strict enough.
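Assuming the limit does apply, the bound might instead look something like the
following (a sketch only; BasicImageReader has no such named constant today,
and a -1 "unknown size" from ZipEntry#getSize() would still have to be handled
by the caller, per point (1)):

// Mirror java.io.InputStream's conservative limit rather than Integer.MAX_VALUE.
private static final int MAX_BUFFER_SIZE = Integer.MAX_VALUE - 8;

// Hypothetical revised check inside readBuffer(long offset, long size).
if (size < 0 || size > MAX_BUFFER_SIZE) {
    throw new IndexOutOfBoundsException("Bad size: " + size);
}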