How to create a new file in a GCS bucket in Java

I'm trying to create a new file in one of my google cloud storage buckets.
Here is the code I wrote:
InputStream stream = new ByteArrayInputStream(report.getBytes(StandardCharsets.UTF_8.name()));
GcsFilename filename = new GcsFilename(BUCKET_NAME, csvFileName);
GcsFileOptions options = GcsFileOptions.getDefaultInstance();
GcsOutputChannel outputChannel = gcsService.createOrReplace(filename, options);
copy(stream, Channels.newOutputStream(outputChannel));
The code runs without errors, but the bucket remains empty.
I'm 99% sure the code itself is correct (it works for me in another project), so I suspect the problem is with the GCS bucket settings.
Thanks.
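One thing worth checking before the bucket settings: with the App Engine GCS client, writes are buffered and the object only becomes visible once the output channel is closed (or waitForOutstandingWrites() is called), and the snippet above never closes the channel. A minimal sketch of a complete write, reusing the same names and assuming Guava's ByteStreams for the copy:
InputStream stream = new ByteArrayInputStream(report.getBytes(StandardCharsets.UTF_8));
GcsFilename filename = new GcsFilename(BUCKET_NAME, csvFileName);
GcsOutputChannel outputChannel =
        gcsService.createOrReplace(filename, GcsFileOptions.getDefaultInstance());
try (OutputStream out = Channels.newOutputStream(outputChannel)) {
    // Closing the stream closes the channel, which flushes the object to the bucket.
    ByteStreams.copy(stream, out);
}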

Related

Upload file from local server storage to production cloud storage in app engine

I am able to upload a file using GCS. The file is stored and is visible in the admin UI, and when I run my code in production it works fine as well; there I can view the files in the Cloud Platform console. Now I want to upload a file from my local environment to the production bucket. The code I am using is given below.
public static void uploadFile(GcsService gcsService, FileProperty fileProperty,
        InputStream inputStream, String bucket) throws IOException {
    GcsFileOptions gcsFileOptions = new GcsFileOptions.Builder()
            .mimeType(fileProperty.getContentType())
            .acl("public-read")
            .build();
    GcsFilename fileName = new GcsFilename(bucket,
            fileProperty.getName() + "." + fileProperty.getExtension());
    GcsOutputChannel outputChannel = gcsService.createOrReplace(fileName, gcsFileOptions);
    transfer(inputStream, Channels.newOutputStream(outputChannel));
}
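For reference, a hypothetical call site for this method might look like the sketch below. FileProperty and the transfer helper are assumed to be the asker's own classes, and the bucket name and file path are placeholders.
// Hypothetical caller: stream a local file into the production bucket.
GcsService gcsService = GcsServiceFactory.createGcsService();
FileProperty fileProperty = new FileProperty(); // assumed bean with name/extension/contentType
fileProperty.setName("report");
fileProperty.setExtension("csv");
fileProperty.setContentType("text/csv");
try (InputStream in = new FileInputStream("/tmp/report.csv")) {
    uploadFile(gcsService, fileProperty, in, "my-production-bucket");
}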

Is there a way to write and create a text file in the google drive?

I have a simple server I'm writing that logs all its actions to a log file. Right now the file is created in the current directory. Is there a way to create the file and write to it in Google Drive, so I can see the server's actions from remote locations?
My Current code
void NewFile(String path, String data) {
    try {
        File file = new File(path);
        // if the file doesn't exist, create it
        if (!file.exists()) {
            file.createNewFile();
        }
        // second argument true = append to the file
        FileWriter fileWriter = new FileWriter(file, true);
        BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
        bufferedWriter.write(data);
        bufferedWriter.write("\r");
        bufferedWriter.write("\n");
        bufferedWriter.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
You can follow the sample in Quickstart: Run a Drive App in Java.
Here is the relevant code segment from that sample:
//Create a new authorized API client
Drive service = new Drive.Builder(httpTransport, jsonFactory, credential).build();
//Insert a file
File body = new File();
body.setTitle("My document");
body.setDescription("A test document");
body.setMimeType("text/plain");
java.io.File fileContent = new java.io.File("document.txt");
FileContent mediaContent = new FileContent("text/plain", fileContent);
File file = service.files().insert(body, mediaContent).execute();
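As a follow-up, if the goal is to keep one log file in Drive up to date rather than inserting a new file each time, the same (v2) client also exposes an update call. A rough sketch, assuming the file ID saved from the insert above and a hypothetical local log path:
// Hypothetical: re-upload the local log over the existing Drive file.
String fileId = file.getId(); // saved from the insert() call above
java.io.File updatedContent = new java.io.File("document.txt");
FileContent updatedMedia = new FileContent("text/plain", updatedContent);
File metadata = service.files().get(fileId).execute();
File updated = service.files().update(fileId, metadata, updatedMedia).execute();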
Yes, you can. Check out the Google Docs API documentation.
The Google Documents List API allows developers to create, retrieve,
update, and delete Google Docs (including but not limited to text
documents, spreadsheets, presentations, and drawings), files, and
collections. It also provides some advanced features like resource
archives, Optical Character Recognition, translation, and revision
history.
https://developers.google.com/google-apps/documents-list/#what_can_this_api_do

How to read large file from Amazon S3?

I have a program which reads a text file from Amazon S3, but the file is around 400 MB. I have increased my heap size, but I'm still getting a Java heap space error, so I'm not sure whether my code is correct. I'm using the AWS SDK for Java and Guava to handle the file stream.
Please help.
S3Object object = s3Client.getObject(new GetObjectRequest(bucketName, folder + filename));
final InputStream objectData = object.getObjectContent();
InputSupplier<InputStreamReader> supplier = CharStreams.newReaderSupplier(
        new InputSupplier<InputStream>() {
            @Override
            public InputStream getInput() throws IOException {
                return objectData;
            }
        }, Charsets.UTF_8);
String content = CharStreams.toString(supplier);
objectData.close();
return content;
I use these options for my JVM: -Xms512m -Xmx2g. I use Ant to run the main program, so I include the JVM options in ANT_OPTS as well, but it's still not working.
The point of InputSupplier -- though you should be using ByteSource and CharSource these days -- is that you should never have access to the InputStream from the outside, so you never have to worry about closing it yourself.
If you're using an old version of Guava before ByteSource and CharSource were introduced, then this should be
InputSupplier<InputStreamReader> supplier = CharStreams.newReaderSupplier(
        new InputSupplier<InputStream>() {
            @Override
            public InputStream getInput() throws IOException {
                S3Object object = s3Client.getObject(
                        new GetObjectRequest(bucketName, folder + filename));
                return object.getObjectContent();
            }
        }, Charsets.UTF_8);
String content = CharStreams.toString(supplier);
If you're using Guava 14, then this can be done more fluently as
new ByteSource() {
    @Override public InputStream openStream() throws IOException {
S3Object object = s3Client.getObject(
new GetObjectRequest(bucketName, folder + filename));
return object.getObjectContent();
}
}.asCharSource(Charsets.UTF_8).read();
That said: your file might be 400 MB, but Java Strings are stored as UTF-16, which can easily double the memory consumption. You may either need a lot more memory, or you need to figure out a way to avoid keeping the whole file in memory at once.
Rather than loading the whole file into memory, you can read it in parts, so the full content never has to be held in memory at once and you avoid hitting the heap limit.
GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key);
rangeObjectRequest.setRange(0, 1000); // retrieve the first 1000 bytes
S3Object objectPortion = s3Client.getObject(rangeObjectRequest);
InputStream objectData = objectPortion.getObjectContent();
// Loop over successive ranges, appending each chunk to a local file,
// so the whole object is never held in memory at once.
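A sketch of that loop, assuming the same s3Client, bucketName, and key as above, a hypothetical local target path, and Guava's ByteStreams for the copy:
// Hypothetical example: stream the object to a local file in 1 MB ranges.
long totalLength = s3Client.getObjectMetadata(bucketName, key).getContentLength();
long chunkSize = 1024 * 1024;
try (OutputStream out = new FileOutputStream("/tmp/local-copy.txt")) {
    for (long start = 0; start < totalLength; start += chunkSize) {
        long end = Math.min(start + chunkSize, totalLength) - 1; // range end is inclusive
        GetObjectRequest request = new GetObjectRequest(bucketName, key).withRange(start, end);
        S3Object portion = s3Client.getObject(request);
        try (InputStream in = portion.getObjectContent()) {
            ByteStreams.copy(in, out);
        }
    }
}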

Google cloud storage file location in google app engine

I have used the code below to write something to my file in Google Cloud Storage.
FileService fileService = FileServiceFactory.getFileService();
GSFileOptionsBuilder optionsBuilder = new GSFileOptionsBuilder()
.setBucket(BUCKETNAME)
.setKey(FILENAME)
.setMimeType("text/html")
.setAcl("public_read")
.addUserMetadata("myfield1", "my field value");
AppEngineFile writableFile =
fileService.createNewGSFile(optionsBuilder.build());
// Open a channel to write to it
boolean lock = false;
FileWriteChannel writeChannel =
fileService.openWriteChannel(writableFile, lock);
// Different standard Java ways of writing to the channel
// are possible. Here we use a PrintWriter:
PrintWriter out = new PrintWriter(Channels.newWriter(writeChannel, "UTF8"));
out.println("The woods are lovely dark and deep.");
out.println("But I have promises to keep.");
// Close without finalizing and save the file path for writing later
out.close();
String path = writableFile.getFullPath();
// Write more to the file in a separate request:
writableFile = new AppEngineFile(path);
// Lock the file because we intend to finalize it and
// no one else should be able to edit it
lock = true;
writeChannel = fileService.openWriteChannel(writableFile, lock);
// This time we write to the channel directly
writeChannel.write(ByteBuffer.wrap
("And miles to go before I sleep.".getBytes()));
// Now finalize
writeChannel.closeFinally();
resp.getWriter().println("Done writing...");
// At this point, the file is visible in App Engine as:
// "/gs/BUCKETNAME/FILENAME"
// and to anybody on the Internet through Cloud Storage as:
// (http://storage.googleapis.com/BUCKETNAME/FILENAME)
// We can now read the file through the API:
String filename = "/gs/" + BUCKETNAME + "/" + FILENAME;
AppEngineFile readableFile = new AppEngineFile(filename);
FileReadChannel readChannel =
fileService.openReadChannel(readableFile, false);
// Again, different standard Java ways of reading from the channel.
BufferedReader reader =
new BufferedReader(Channels.newReader(readChannel, "UTF8"));
String line = reader.readLine();
resp.getWriter().println("READ:" + line);
// line = "The woods are lovely, dark, and deep."
readChannel.close();
It seems to write and then read back correctly, but when I check the Cloud Storage area (using Firefox) the file hasn't been edited. Any idea?
Is this in the production or the development environment?
When you're using the dev server, writes to Google Storage are simulated and are not written to a real bucket.
There could be two problems.
1. You may have forgotten to add permissions to your bucket. See https://developers.google.com/appengine/docs/java/googlestorage/overview#Prerequisites for how to do it.
2. You are trying to update the file. After writeChannel.closeFinally() the file becomes read-only; you cannot change, update, or append to it.

Writing Zip Files to GAE Blobstore

I'm using the Java API for reading and writing to the Google App Engine Blobstore.
I need to zip files directly into the Blobstore, meaning I have String objects that I want stored in the Blobstore in zipped form.
My problem is that the standard zipping classes write to an OutputStream, while GAE doesn't seem to provide one for writing to the Blobstore.
Is there a way to combine these APIs, or are there different APIs I could use (I haven't found any)?
If I'm not mistaken, you can try to use the Blobstore low-level API. It offers a Java Channel (FileWriteChannel), so you could probably convert it to an OutputStream:
Channels.newOutputStream(channel)
and use that output stream with the java.util.zip.* classes you are currently using (here is a related example that uses Java NIO to zip something to a Channel/OutputStream).
I have not tried it.
Here is one example that writes file content, zips it, and stores it in the Blobstore:
AppEngineFile file = fileService.createNewBlobFile("application/zip", "fileName.zip");
try {
    boolean lock = true; // lock the file because we will finalize it
    FileWriteChannel writeChannel = fileService.openWriteChannel(file, lock);
    // convert the channel to an OutputStream
    OutputStream blobOutputStream = Channels.newOutputStream(writeChannel);
    ZipOutputStream zip = new ZipOutputStream(blobOutputStream);
    zip.putNextEntry(new ZipEntry("fileNameTozip.txt"));
    // read the content from your file or whatever source you want
    final byte[] data = IOUtils.toByteArray(file1InputStream);
    // write the bytes to the zip entry
    zip.write(data);
    zip.closeEntry();
    zip.close();
    // Now finalize
    writeChannel.closeFinally();
} catch (IOException e) {
    throw new RuntimeException("Error writing file into blobstore", e);
}
The other answers use the Blobstore API, but the currently recommended way is to use the App Engine GCS client.
Here is what I use to zip multiple files in GCS:
public static void zipFiles(final GcsFilename targetZipFile,
        Collection<GcsFilename> filesToZip) throws IOException {
    final GcsFileOptions options = new GcsFileOptions.Builder()
            .mimeType(MediaType.ZIP.toString()).build();
    try (GcsOutputChannel outputChannel = gcsService.createOrReplace(targetZipFile, options);
         OutputStream out = Channels.newOutputStream(outputChannel);
         ZipOutputStream zip = new ZipOutputStream(out)) {
        for (GcsFilename file : filesToZip) {
            // MB is a prefetch buffer size constant (e.g. 1024 * 1024)
            try (GcsInputChannel readChannel = gcsService.openPrefetchingReadChannel(file, 0, MB);
                 InputStream is = Channels.newInputStream(readChannel)) {
                final GcsFileMetadata meta = gcsService.getMetadata(file);
                if (meta == null) {
                    log.warn("{} NOT FOUND. Skipping.", file.toString());
                    continue;
                }
                final ZipEntry entry = new ZipEntry(file.getObjectName());
                zip.putNextEntry(entry);
                ByteStreams.copy(is, zip);
                zip.closeEntry();
            }
            zip.flush();
        }
    }
}
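For completeness, a hypothetical call site for this helper; the bucket and object names below are placeholders:
// Hypothetical usage: zip two existing objects into a single archive.
GcsFilename target = new GcsFilename("my-bucket", "archive/reports.zip");
List<GcsFilename> sources = Arrays.asList(
        new GcsFilename("my-bucket", "reports/january.csv"),
        new GcsFilename("my-bucket", "reports/february.csv"));
zipFiles(target, sources);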
