This page is only for existing customers of the S3 Glacier service using vaults and the original REST API from 2012.
If you're looking for archival storage solutions, we recommend using the S3 Glacier storage classes in Amazon S3: S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. To learn more about these storage options, see S3 Glacier storage classes in the Amazon S3 User Guide.
Upload an archive to a vault in S3 Glacier using the Amazon SDK for Java
The following Java code example uses the high-level API of the Amazon SDK for Java to upload a sample archive to a vault. In the code example, note the following:
- The example creates an instance of the AmazonGlacierClient class.
- The example uses the upload API operation of the high-level ArchiveTransferManager class of the Amazon SDK for Java (a minimal sketch of this flow appears after this list).
- The example uses the US West (Oregon) Region (us-west-2).
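The following is a minimal sketch of that high-level flow. It is not the complete documented sample: it assumes the SDK for Java 1.x, credentials loaded from the default profile, and a hypothetical vault name and archive path, so adjust those values for your environment.

import java.io.File;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.transfer.ArchiveTransferManager;
import com.amazonaws.services.glacier.transfer.UploadResult;

public class ArchiveUploadHighLevel {

    // Hypothetical values -- replace with your own vault name and archive path.
    private static final String VAULT_NAME = "examplevault";
    private static final String ARCHIVE_TO_UPLOAD = "C:\\mydata\\myarchive.zip";

    public static void main(String[] args) {
        // Credentials are assumed to come from the default profile.
        ProfileCredentialsProvider credentials = new ProfileCredentialsProvider();

        // Low-level client pointed at the US West (Oregon) Region (us-west-2).
        AmazonGlacierClient client = new AmazonGlacierClient(credentials);
        client.setEndpoint("https://glacier.us-west-2.amazonaws.com/");

        try {
            // The high-level ArchiveTransferManager computes the SHA-256 tree hash
            // and performs the upload, including multipart handling for large files.
            ArchiveTransferManager atm = new ArchiveTransferManager(client, credentials);
            UploadResult result = atm.upload(VAULT_NAME, "my archive", new File(ARCHIVE_TO_UPLOAD));
            System.out.println("Archive ID: " + result.getArchiveId());
        } catch (Exception e) {
            System.err.println("Upload failed: " + e.getMessage());
        }
    }
}

The SDK for Java 2.x does not include ArchiveTransferManager, which is why the 2.x example later on this page computes the SHA-256 tree hash explicitly before calling uploadArchive.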
For step-by-step instructions on how to run this example, see Running Java Examples for Amazon S3 Glacier Using Eclipse. You must update the code as shown with the name of the archive file that you want to upload.
Note
Amazon S3 Glacier maintains an inventory of all of the archives in a vault. When you upload the archive in the following example, it does not appear in the vault in the management console until the vault inventory has been updated. This update usually occurs once a day.
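If you prefer not to rely on the console view, you can request that vault inventory programmatically by initiating an inventory-retrieval job. The sketch below is a minimal example, assuming the SDK for Java 2.x and a hypothetical vault named examplevault; the job itself can take hours to complete, and the inventory it returns still reflects only the most recent daily update.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.glacier.GlacierClient;
import software.amazon.awssdk.services.glacier.model.InitiateJobRequest;
import software.amazon.awssdk.services.glacier.model.InitiateJobResponse;
import software.amazon.awssdk.services.glacier.model.JobParameters;

public class RequestVaultInventory {
    public static void main(String[] args) {
        GlacierClient glacier = GlacierClient.builder()
                .region(Region.US_EAST_1)
                .build();

        // An "inventory-retrieval" job asks S3 Glacier to prepare the vault's archive list.
        JobParameters jobParameters = JobParameters.builder()
                .type("inventory-retrieval")
                .build();

        InitiateJobRequest request = InitiateJobRequest.builder()
                .accountId("-")              // "-" means the account that owns the credentials.
                .vaultName("examplevault")   // Hypothetical vault name.
                .jobParameters(jobParameters)
                .build();

        InitiateJobResponse response = glacier.initiateJob(request);
        System.out.println("Inventory retrieval job ID: " + response.jobId());

        glacier.close();
    }
}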
- SDK for Java 2.x
-
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Amazon code examples repository.
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.glacier.GlacierClient;
import software.amazon.awssdk.services.glacier.model.UploadArchiveRequest;
import software.amazon.awssdk.services.glacier.model.UploadArchiveResponse;
import software.amazon.awssdk.services.glacier.model.GlacierException;
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class UploadArchive {
    static final int ONE_MB = 1024 * 1024;

    public static void main(String[] args) {
        final String usage = """

                Usage: <strPath> <vaultName>\s

                Where:
                   strPath - The path to the archive to upload (for example, C:\\AWS\\test.pdf).
                   vaultName - The name of the vault.
                """;

        if (args.length != 2) {
            System.out.println(usage);
            System.exit(1);
        }

        String strPath = args[0];
        String vaultName = args[1];
        File myFile = new File(strPath);
        Path path = Paths.get(strPath);

        GlacierClient glacier = GlacierClient.builder()
                .region(Region.US_EAST_1)
                .build();

        String archiveId = uploadContent(glacier, path, vaultName, myFile);
        System.out.println("The ID of the archived item is " + archiveId);
        glacier.close();
    }

    public static String uploadContent(GlacierClient glacier, Path path, String vaultName, File myFile) {
        // Get an SHA-256 tree hash value.
        String checkVal = computeSHA256(myFile);
        try {
            UploadArchiveRequest uploadRequest = UploadArchiveRequest.builder()
                    .vaultName(vaultName)
                    .checksum(checkVal)
                    .build();

            UploadArchiveResponse res = glacier.uploadArchive(uploadRequest, path);
            return res.archiveId();

        } catch (GlacierException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
        return "";
    }

    private static String computeSHA256(File inputFile) {
        try {
            byte[] treeHash = computeSHA256TreeHash(inputFile);
            System.out.printf("SHA-256 tree hash = %s\n", toHex(treeHash));
            return toHex(treeHash);

        } catch (IOException ioe) {
            System.err.format("Exception when reading from file %s: %s", inputFile, ioe.getMessage());
            System.exit(-1);

        } catch (NoSuchAlgorithmException nsae) {
            System.err.format("Cannot locate MessageDigest algorithm for SHA-256: %s", nsae.getMessage());
            System.exit(-1);
        }
        return "";
    }

    public static byte[] computeSHA256TreeHash(File inputFile) throws IOException, NoSuchAlgorithmException {
        byte[][] chunkSHA256Hashes = getChunkSHA256Hashes(inputFile);
        return computeSHA256TreeHash(chunkSHA256Hashes);
    }

    /**
     * Computes an SHA256 checksum for each 1 MB chunk of the input file. This
     * includes the checksum for the last chunk, even if it's smaller than 1 MB.
     */
    public static byte[][] getChunkSHA256Hashes(File file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        long numChunks = file.length() / ONE_MB;
        if (file.length() % ONE_MB > 0) {
            numChunks++;
        }

        if (numChunks == 0) {
            return new byte[][] { md.digest() };
        }

        byte[][] chunkSHA256Hashes = new byte[(int) numChunks][];
        FileInputStream fileStream = null;

        try {
            fileStream = new FileInputStream(file);
            byte[] buff = new byte[ONE_MB];
            int bytesRead;
            int idx = 0;

            while ((bytesRead = fileStream.read(buff, 0, ONE_MB)) > 0) {
                md.reset();
                md.update(buff, 0, bytesRead);
                chunkSHA256Hashes[idx++] = md.digest();
            }

            return chunkSHA256Hashes;

        } finally {
            if (fileStream != null) {
                try {
                    fileStream.close();
                } catch (IOException ioe) {
                    System.err.printf("Exception while closing %s.\n %s", file.getName(), ioe.getMessage());
                }
            }
        }
    }

    /**
     * Computes the SHA-256 tree hash for the passed array of 1 MB chunk
     * checksums.
     */
    public static byte[] computeSHA256TreeHash(byte[][] chunkSHA256Hashes) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[][] prevLvlHashes = chunkSHA256Hashes;

        while (prevLvlHashes.length > 1) {
            int len = prevLvlHashes.length / 2;
            if (prevLvlHashes.length % 2 != 0) {
                len++;
            }

            byte[][] currLvlHashes = new byte[len][];
            int j = 0;
            for (int i = 0; i < prevLvlHashes.length; i = i + 2, j++) {
                // If there are at least two elements remaining.
                if (prevLvlHashes.length - i > 1) {
                    // Calculate a digest of the concatenated nodes.
                    md.reset();
                    md.update(prevLvlHashes[i]);
                    md.update(prevLvlHashes[i + 1]);
                    currLvlHashes[j] = md.digest();

                } else {
                    // Take care of the remaining odd chunk
                    currLvlHashes[j] = prevLvlHashes[i];
                }
            }

            prevLvlHashes = currLvlHashes;
        }

        return prevLvlHashes[0];
    }

    /**
     * Returns the hexadecimal representation of the input byte array
     */
    public static String toHex(byte[] data) {
        StringBuilder sb = new StringBuilder(data.length * 2);
        for (byte datum : data) {
            String hex = Integer.toHexString(datum & 0xFF);
            if (hex.length() == 1) {
                // Append leading zero.
                sb.append("0");
            }
            sb.append(hex);
        }
        return sb.toString().toLowerCase();
    }
}
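You can run the example with the archive path and vault name as arguments, for example java UploadArchive C:\AWS\test.pdf examplevault (the vault name here is hypothetical). The locally computed tree hash is what the service uses to validate the upload; as a small optional extension, and assuming UploadArchiveResponse also exposes the service-computed checksum (mirroring the x-amz-sha256-tree-hash response header), you could compare the two values inside uploadContent after the call returns:

// Sketch: optional integrity check inside uploadContent, after uploadArchive returns.
UploadArchiveResponse res = glacier.uploadArchive(uploadRequest, path);
if (!checkVal.equals(res.checksum())) {
    System.err.println("Checksum mismatch: computed " + checkVal
            + " but the service returned " + res.checksum());
}
return res.archiveId();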
- For API details, see UploadArchive in the Amazon SDK for Java 2.x API Reference.