Apache Commons Compress 1.19 released

Apache Commons Compress 1.19 has been released. This is primarily a bug fix release.

New features

  • ZipFile can now skip parsing the local file headers, which may speed up reading the archive, at the cost of potentially missing important information
  • TarArchiveInputStream has a new lenient constructor argument that can be used to accept certain broken archives
  • ArjArchiveEntry and SevenZArchiveEntry now implement hashCode and equals
  • Added a new class MultiReadOnlySeekableByteChannel that can be used to concatenate the parts of a multi-volume 7z archive so they can be read by SevenZFile
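A minimal sketch of the new multi-volume support (the file names are made up; assumes Commons Compress 1.19 is on the classpath):

```java
import java.io.File;
import java.io.IOException;
import java.nio.channels.SeekableByteChannel;

import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry;
import org.apache.commons.compress.archivers.sevenz.SevenZFile;
import org.apache.commons.compress.utils.MultiReadOnlySeekableByteChannel;

public class MultiVolumeExample {
    public static void main(String[] args) throws IOException {
        // Concatenate the volumes of a multi-volume 7z archive into one channel
        SeekableByteChannel channel = MultiReadOnlySeekableByteChannel.forFiles(
                new File("archive.7z.001"), new File("archive.7z.002"));
        // SevenZFile reads the concatenated channel as a single archive
        try (SevenZFile sevenZ = new SevenZFile(channel)) {
            SevenZArchiveEntry entry;
            while ((entry = sevenZ.getNextEntry()) != null) {
                System.out.println(entry.getName());
            }
        }
    }
}
```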

Bug fixes

  • ZipArchiveInputStream could forget that the compression level had changed in certain cases
  • Fixed another potential resource leak in ParallelScatterZipCreator#writeTo
  • Certain malformed LZ4 or Snappy input now causes an IOException rather than a RuntimeException
  • ZipArchiveInputStream can now read stored entries that use a data descriptor without the signature invented by InfoZIP
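As an illustration of the LZ4/Snappy change (a sketch; the byte values are made up), callers can now handle malformed input with a single IOException catch:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

import org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream;

public class MalformedLz4Example {
    public static void main(String[] args) {
        byte[] malformed = {0x00, 0x01, 0x02, 0x03}; // not a valid LZ4 frame
        try (FramedLZ4CompressorInputStream in =
                new FramedLZ4CompressorInputStream(new ByteArrayInputStream(malformed))) {
            in.read();
        } catch (IOException ex) {
            // since 1.19 malformed input surfaces as IOException, not a RuntimeException
            System.err.println("bad input: " + ex.getMessage());
        }
    }
}
```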

For more details, see the announcement.

Download: https://commons.apache.org/proper/commons-compress/download_compress.cgi

Commons Compress is a Java library for compressing and decompressing files in formats such as tar, zip, and bzip2.

The following code adds an entry to an ar archive:

ArArchiveEntry entry = new ArArchiveEntry(name, size);
arOutput.putArchiveEntry(entry);
arOutput.write(contentOfEntry);
arOutput.closeArchiveEntry();

Reading an entry from an ar archive:

ArArchiveEntry entry = (ArArchiveEntry) arInput.getNextEntry();
byte[] content = new byte[(int) entry.getSize()];
int offset = 0;
while (offset < content.length) {
    int read = arInput.read(content, offset, content.length - offset);
    if (read < 0) {
        break; // truncated archive
    }
    offset += read;
}
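The arOutput and arInput streams in the snippets above are assumed to already exist; a minimal sketch of constructing them (the file name is an example):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.commons.compress.archivers.ar.ArArchiveInputStream;
import org.apache.commons.compress.archivers.ar.ArArchiveOutputStream;

public class ArStreamsExample {
    public static void main(String[] args) throws IOException {
        // writing: wrap any OutputStream
        try (ArArchiveOutputStream arOutput = new ArArchiveOutputStream(
                Files.newOutputStream(Paths.get("archive.a")))) {
            // putArchiveEntry / write / closeArchiveEntry as shown above
        }
        // reading: wrap any InputStream
        try (ArArchiveInputStream arInput = new ArArchiveInputStream(
                Files.newInputStream(Paths.get("archive.a")))) {
            // getNextEntry / read as shown above
        }
    }
}
```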

Origin: www.oschina.net/news/109429/apache-commons-compress-1-19-released