IBM Cloud Object Storage HMAC Credentials - object-storage

Following along with the documentation, I have tried multiple times to create HMAC credentials for use in my application, but the credentials created do not contain cos_hmac_keys.
This should be simple enough; I added the flag
to the inline JSON config for my credential creation, but I'm still having no luck. Any insight here would be great.
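For reference, the inline parameter that triggers HMAC key generation when creating an Object Storage service credential is a one-field JSON object (this is the flag documented for IBM COS; note the upper-case spelling):

```json
{"HMAC": true}
```

With the ibmcloud CLI the same flag can be passed through the --parameters option of resource service-key-create; in the console it goes in the "inline configuration parameters" field of the credential dialog.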

I had the same issue; here's my suggestion.
I'm wondering whether your object-storage instance is newly created or was created a long time ago. If it's old, try creating a new one: in my case, creating the HMAC credentials failed on my old instance but succeeded on a newly created one. (Each has slightly different UI, looks, and icons, like this.)


Putting file to S3 right after it's created

I have two machines with different Java applications that both run on Linux and use a common Windows share folder. One app is triggering another to generate a specific file (e.g. image/pdf). Then the first app tries to upload the generated file to S3. The problem is I sometimes get this: The Content-MD5 you specified did not match what we received.
OR this:
com.amazonaws.AmazonClientException: Data read has a different length than the expected: dataLength=247898; expectedLength=262062; includeSkipped=false; in.getClass()=class com.amazonaws.internal.ResettableInputStream; markedSupported=true; marked=0; resetSinceLastMarked=false; markCount=1; resetCount=0
All the processes happen synchronously, one after another (I have also checked the logs, which show no concurrent activity). Also, I am not setting the MD5 hash or the content length myself; the AWS SDK handles that on its own.
So my guess is that the generating application has written the file and returned, but the OS is in fact still writing it in the background, which is why the first app gets an incomplete file.
I would really appreciate suggestions on how to handle such situations. Maybe there is a way to detect if the file is not currently being modified by the OS?
I was experiencing AmazonS3Exception: The Content-MD5 you specified did not match what we received. I finally solved it by addressing the first item on the list below; it was not terribly obvious.
Possible Solutions For Anyone Else:
Make sure not to use the same ObjectMetadata object across multiple putObject calls.
Consider disabling ChunkedEncoding. client.setS3ClientOptions(S3ClientOptions.builder().disableChunkedEncoding().build())
Make sure the file isn't being edited while it's being uploaded.
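For the "file still being written" case, one pragmatic heuristic is to wait until the file's size stops changing before starting the upload. This is a sketch using only the standard library (the poll interval and timeout are illustrative values, not recommendations); a size-stability check is not bulletproof over a Windows share, so if you control the producer, having it write to a temp name and rename into place when done is more robust.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: wait until a file's size stops changing before uploading it.
public class StableFileCheck {

    public static boolean waitForStableSize(Path file, long pollMillis, long timeoutMillis)
            throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        long lastSize = -1L;
        while (System.currentTimeMillis() < deadline) {
            long size = Files.size(file);
            if (size == lastSize && size > 0) {
                return true;          // size unchanged across two polls: treat as complete
            }
            lastSize = size;
            Thread.sleep(pollMillis);
        }
        return false;                 // still growing (or empty) when the timeout expired
    }
}
```

Only call putObject once waitForStableSize has returned true; otherwise retry or fail loudly rather than uploading a partial file.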

ColdFusion/RabbitMQ Fails on factory.newConnection()

I'm attempting to connect my ColdFusion app to CloudAMQP's RabbitMQ service. I've been able to create the Java object, but when I attempt to create a newConnection(), it fails miserably. I'm thinking it may have something to do with my config. Here's how I've mapped CloudAMQP's settings (right) to my code (left). I'm basically following Luis Majano's example code on GitHub (lmajano/messaging-polyglot), which he refers to in his video Down the RabbitMQ Hole with ColdFusion.
NOTE: I will rotate the password after posting, so these credentials won't work. Seems like the prudent thing to do :)
When I run this code I'm able to create a factory successfully. The writeDump(factory) code outputs the following.
NOTE: the newConnection() method
Now, when I attempt to actually create a connection with factory.newConnection(), like so...
it fails! Here is the result of the dump within the catch writeDump(err)
Any idea why it would be failing on the factory.newConnection() method call?
Set the vhost:
The vhost is the same as the username for shared cloudamqp instances.
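Since the vhost shows up as the path component of the amqp:// URL that CloudAMQP hands you, you can derive it (and the username) directly from that URL rather than hard-coding it. A stdlib-only sketch (the URL in the comment is a made-up example, not real credentials):

```java
import java.net.URI;

// Sketch: derive RabbitMQ connection settings from a CloudAMQP-style URL,
// e.g. "amqp://myuser:secret@lemur.cloudamqp.com/myuser".
// On shared CloudAMQP plans the vhost is the same string as the username.
public class AmqpUrlParser {

    public static String vhostOf(String amqpUrl) {
        String path = URI.create(amqpUrl).getPath();   // e.g. "/myuser"
        return (path == null || path.isEmpty()) ? "/" : path.substring(1);
    }

    public static String userOf(String amqpUrl) {
        String userInfo = URI.create(amqpUrl).getUserInfo();  // "user:password"
        return userInfo.split(":", 2)[0];
    }
}
```

With the RabbitMQ Java client you would then call factory.setVirtualHost(vhostOf(url)) alongside setHost/setUsername/setPassword before factory.newConnection(); forgetting setVirtualHost leaves it at the default "/", which the shared broker rejects.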

Oracle-UCM service CHECKIN_UNIVERSAL is throwing errors when trying to checkin an existing file

I'm working on Java code that checks whether a file exists in the system and whether it's checked out. After these checks, it calls the CHECKIN_UNIVERSAL service. This is where it stops: checking in a new file works just fine, but checking in an existing file gives errors.
The specific error displayed (without modifying my original code) is !cscheckinitemexists. A bunch of googling turned up the suggestion to clear the data binder, but then it fails with an error that it cannot retrieve or use the security token.
Here's the code I use to clear and retrieve the data binder:
m_binder.setEnvironment(new IdcProperties(SharedObjects.getSecureEnvironment()));
What does the rest of your code look like? You can link to a Gist.
Generally, I have run into this due to data pollution (as you stated).
Is there a reason you are using m_binder instead of creating a brand new DataBinder?
After looking at your gist, you are using m_binder (the DataBinder from the service) to execute CHECKIN_UNIVERSAL. Don't do this. Use a separate DataBinder (as you did for the DOC_INFO_BY_NAME service call).
Either use requestBinder or a new DataBinder.
Another way to avoid this issue is to simply not look for the checkout. CHECKIN_UNIVERSAL supports a flag that checks out a content item if it's not already checked out.
Add the flag "isForceCheckout" to your binder, with a value of "1".
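To make the shape of that call concrete, here is a sketch of the parameters a *fresh* binder would carry for this check-in. The field names are standard UCM service parameters, but treat the exact set as an assumption to verify against your instance; in RIDC you would create a new DataBinder (e.g. via idcClient.createBinder()) and putLocal(...) each entry, rather than reusing the service's m_binder.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: parameters for checking in a new revision of an existing item,
// built for a brand-new binder instead of the polluted service binder.
public class CheckinParams {

    public static Map<String, String> forExistingItem(String contentId) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("IdcService", "CHECKIN_UNIVERSAL");
        params.put("dDocName", contentId);
        // Check the item out automatically if it isn't already checked out,
        // instead of probing for checkout state first.
        params.put("isForceCheckout", "1");
        return params;
    }
}
```

The point is that nothing from the inbound request binder leaks into the check-in call; every field the service sees is one you put there deliberately.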

How to get new token after calling reconnect api of Intuit? [duplicate]

This question already has an answer here:
How to call API (Oauth 1.0)?
I wrote the code to reconnect to Intuit. My code is:
I can't be sure whether it is working or not. What is the best way to test it? I went through this page from Intuit:
but I don't have much idea about it.
Another main issue I am facing is: how can I get a new token again? As defined by the Intuit checklist, we should not call the OAuth flow again.
Added part of question:
I know the OAuth Playground helps a lot.
But I am looking for a mechanism to get a new accessToken and accessTokenSecret after I call the reconnect API. I am calling the API as follows:
IAPlatformClient client = new IAPlatformClient();
This is working fine, because if I try to use the old tokens from the database it throws an exception, as defined by Intuit.
This code has to run through a scheduler mechanism. After calling the reconnect API, I have to update my existing keys in the database, but I can't get the newly generated keys. So please suggest a mechanism that returns the new accessToken and accessTokenSecret.
I have tried this:
Map<String, String> requesttokenmap = client.getRequestTokenAndSecret(INTUIT_QB_OAUTH_CONSUMER_KEY, INTUIT_QB_OAUTH_CONSUMER_SECRET);
final Map<String, String> oauthAccessTokenMap =
    client.getOAuthAccessToken(verifierCode, requesttokenmap.get(IntuitSSOConstants.REQUEST_TOKEN),
        requesttokenmap.get(IntuitSSOConstants.REQUEST_TOKEN_SECRET));
But I will not have the verifierCode, which has to be passed as a parameter in the last block of code.
So, how can I get accessToken and accessTokenSecret?
Please refer to this SO thread:
QuickBook Online Reconnect & expire Issue
For your first question, please check -
Here are the steps to get the OAuth tokens with which you can make API calls against your QBO account.
If you create an app in the app center, you'll get a consumerKey and consumerSecret.
Using those two tokens, you can generate the accessToken and accessSecret from the OAuth Playground.
PN - After completing the C2QB (OAuth) flow, use the 'App Menu API Test' option, which will show you the accessToken and accessSecret.
All the above steps are mentioned in this -
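As for getting the regenerated keys out of the reconnect call: the reconnect endpoint answers with an XML body, and the regenerated credentials are carried in OAuthToken and OAuthTokenSecret elements (element names per Intuit's OAuth 1.0 reconnect docs of the time; verify against an actual response). A stdlib sketch of pulling them out so you can persist them to your database:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Sketch: extract a named element's text from the reconnect response XML.
public class ReconnectResponse {

    public static String element(String xml, String tag) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList nodes = doc.getElementsByTagName(tag);
        return nodes.getLength() > 0 ? nodes.item(0).getTextContent() : null;
    }
}
```

After the scheduled reconnect call succeeds (ErrorCode 0), read element(body, "OAuthToken") and element(body, "OAuthTokenSecret") and overwrite the stored keys; no verifierCode is involved, because reconnect regenerates tokens without re-running the authorization flow.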

Maintaining state between multiple entries to same GWT module

A while ago I asked this question:
GWT Multiple html pages and navigation
Although I was initially satisfied with the simple solution I used (similar to the one suggested in Problem with multiple entry Points in the same module), I am running into a major drawback: the data I fetch and build during the first run of onModuleLoad() is not available in subsequent runs of onModuleLoad() for the same EntryPoint class. For example, say I create an instance of a class LoginSessionInformation on the first run; how do I access that instance when onModuleLoad is called the second time? Thanks.
Edit: This is purely client-side. I take the login information on the first run and construct the LoginSessionInformation instance, and I plan to pass it to the server on the second run.
You can pass the state in the URL token and use the history handler.
You have not said where all your state is. If it's on the server, you're in the same session, so it's all there. If it's a client-side problem, one place to store things is a cookie, or you can query the server again if necessary. Often it's fine to leave the data on the server and do your processing there, rather than passing lots of data back to the client and working on it there.
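To make the history-token idea concrete: small, non-sensitive state can be packed into a key=value token string that survives re-entry into the module. In GWT you would pass the encoded string to History.newItem(...) and read it back with History.getToken() at the top of onModuleLoad(); the encoding itself is plain URL escaping, sketched here with the standard library (never put credentials in the token, which is why a cookie or server session fits the login case better).

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: pack small client-side state into a history-token string
// and unpack it on the next entry into the module.
public class StateToken {

    public static String encode(Map<String, String> state) throws UnsupportedEncodingException {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : state.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), "UTF-8"))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), "UTF-8"));
        }
        return sb.toString();
    }

    public static Map<String, String> decode(String token) throws UnsupportedEncodingException {
        Map<String, String> state = new LinkedHashMap<>();
        if (token.isEmpty()) return state;
        for (String pair : token.split("&")) {
            String[] kv = pair.split("=", 2);
            state.put(URLDecoder.decode(kv[0], "UTF-8"),
                      kv.length > 1 ? URLDecoder.decode(kv[1], "UTF-8") : "");
        }
        return state;
    }
}
```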