Reading a file that will be created in the future when deploying to Heroku - linux

I have deployed a jar on Heroku that opens a connection to a random port. When this jar is started, it creates a config.txt file and keeps receiving requests. This config.txt file contains a randomly generated username and password.
The whole jar is executed by a shell script, and that script is called from the Heroku Procfile.
How do I tell the shell script to read the content of the config.txt file? This config.txt file will be created automatically by the jar.
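A minimal sketch of what the shell script could do, assuming the jar is started from the same script and that config.txt holds the username on its first line and the password on its second (the jar name, file location, and file layout here are assumptions, not taken from the question):
#!/bin/bash
# Start the server jar in the background (placeholder jar name).
java -jar server.jar &
JAR_PID=$!
# Poll until the jar has written config.txt.
while [ ! -f config.txt ]; do
    sleep 1
done
# Read the generated credentials (layout of config.txt is assumed).
USERNAME=$(sed -n '1p' config.txt)
PASSWORD=$(sed -n '2p' config.txt)
echo "server started, credentials loaded for user $USERNAME"
# Keep the script in the foreground so the dyno stays up while the server runs.
wait "$JAR_PID"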

Related

How not to spawn a new dyno when using the heroku run bash command

So, I have an application deployed to Heroku. On Heroku I have a Procfile with the following content:
web: env CONF_listener__port=$PORT bash "./startServer.sh"
When the above command is executed, the shell script launches fine. This shell script starts a jar file, and the jar file creates a config.txt file. This config.txt file contains a username and password that are needed to run some parts of the application. Also, the jar is basically a server that won't close until the app is restarted.
The problem I am having is that the config.txt file created above is placed on a different dyno. This is because the above bash "./startServer.sh" command will spawn a new dyno.
Now I cannot access it. So I was wondering if there is any way I could grab that config.txt file, or maybe tell Heroku not to spawn a new dyno when the bash command is used.
How does the above bash command spawn a new dyno when the dyno count for my app is only 1?
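For context, heroku run bash always starts a one-off dyno, and every dyno has its own ephemeral filesystem, so a one-off dyno never sees files written by the web dyno. A hedged sketch of the difference (the app name is a placeholder, and heroku ps:exec only works if Heroku Exec is enabled for the app):
heroku run bash -a my-app                 # starts a one-off dyno with its own filesystem
ls config.txt                             # the file written by the web dyno is not here

heroku ps:exec --dyno=web.1 -a my-app     # attaches a shell to the already-running web dyno
cat config.txt                            # the file the jar created is visible here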

Rsync doesn't run from cron, but works manually

I have a simple script for backing up files from my server. It does the following:
Connects to the server with SSH
Creates a MySQL dump file
Tars some folders
Exits
Starts rsnapshot to download the folder where the tar.gz and sql file are located
SSHs back to the server just to clean up files
Exits
At the top of my crontab I've set the following:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
SHELL=/bin/bash
However, the script sometimes starts and sometimes doesn't. Also, rsnapshot says the following for a few of my servers when running from cron:
/usr/bin/rsnapshot -c /backup/configs/myserver.com.conf daily: ERROR: /usr/bin/rsync returned 255 while processing user@myserver.com:/home/user/serverdump/
Do you have any ideas about these two issues?
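Rsync exit code 255 is normally just the exit status of the underlying ssh, which in a cron context usually points to the restricted environment (no ssh-agent, different HOME, missing known_hosts). A hedged sketch of how to make the failures visible and pin the key explicitly (the schedule, log path, and key path are assumptions; ssh_args is rsnapshot's config option for extra ssh flags):
# crontab entry: redirect output so intermittent failures leave a trace
30 2 * * * /usr/bin/rsnapshot -c /backup/configs/myserver.com.conf daily >> /var/log/rsnapshot-myserver.log 2>&1

# in /backup/configs/myserver.com.conf: point ssh at an explicit key
# (rsnapshot config fields are tab-separated)
ssh_args	-i /root/.ssh/id_rsa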

Linux file transfer between servers automatically when a file is created

In my work, I use two Linux servers.
The first one is used for web crawling and writes its results to a text file.
The other one is used for analyzing the text file from the web crawler.
So the issue is that when a text file is created on the web-crawling server,
it needs to be transferred automatically to the analysis server.
Following some shell-programming guides, I set up the crawling server so it can execute the scp command without requiring a password (by using the ssh-keygen command and adding the SSH key to the authorized_keys file located in the /root/.ssh directory).
But I cannot figure out how to transfer the file programmatically when it is created.
My job is data analysis (not programming),
so my lack of programming background is a big concern.
If there is a way to trigger scp to copy the file when it is created, please let me know.
You could use inotifywait to monitor the directory and run a command every time a file is created in it. In this case, you would fire off the scp command. If you have it set up not to prompt for a password, you should be all set.
inotifywait -mrq -e create --format '%w%f' /path/to/dir | while read -r FILE; do scp "$FILE" analysis_server:/path/on/analysis/server/; done
You can find out more about inotifywait at http://techarena51.com/index.php/inotify-tools-example/
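For reference, the passwordless-scp setup described in the question usually amounts to something like the following on the crawling server (the analysis_server hostname, the root user, and the key path are placeholders):
# Generate a key pair with no passphrase (run once on the crawling server).
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""
# Copy the public key into /root/.ssh/authorized_keys on the analysis server.
ssh-copy-id root@analysis_server
# Test: this should now copy without prompting for a password.
scp /tmp/testfile.txt root@analysis_server:/path/on/analysis/server/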

Run expect script in current directory

I wrote an sftp expect script to upload and download files.
I put the script in a folder and double-click it to log in to the remote server, but every time my script logs in to the server from the home folder, not from the folder where the script is.
#!/usr/bin/env expect
set login "username"
set addr "server.com"
set pw "mypassword"
set timeout -1
sleep 1
spawn sftp $login@$addr
expect "Password:" {send "$pw\r"}
sleep 1
interact
For example, I put this script on /Desktop, and if I want to upload some files to my server from /Desktop on my local machine, I still have to cd into /Desktop and then run the script; if I just double-click to execute it, it logs in to my server from ~ or /root or whatever the default directory is. I want to log in to my server from the directory where the script is.
Is there any way to find the location of the script file, or do I need to perform a search to locate it?
You might want
cd [file dirname $argv0]
to change into the directory where the script lives.
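The same idea in plain shell, in case the expect script is ever launched through a wrapper script rather than double-clicked directly (both the wrapper and the script name sftp_upload.exp are hypothetical):
#!/bin/bash
# Change into the directory this wrapper lives in, then start the expect script,
# so the sftp session's local directory is the script's directory.
cd "$(dirname "$0")" || exit 1
exec ./sftp_upload.exp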

sftp file age: reading files transferred from sftp that aren't complete yet

I have a Linux server that receives data files via sftp. These files contain data that is immediately imported into an application for use. The directory the files are sent to is constantly read by another process looking for new files to process.
The problem I am having is that the files are getting read before they are completely transferred. Is there a way to hide the files until they have been fully transferred?
One thought I had is to leverage the .filepart concept that many sftp clients use to rename files until they are complete. I don't have control of the clients, though, so is there a way to do this on the server side?
Or is there another way to do this with permissions or something similar?
We have solved a similar problem by creating a staging directory on the same filesystem as the directory the files are read from, and using inotifywait.
You sftp into the staging directory and have inotifywait watch it.
Once inotify sees the CLOSE_WRITE event for a received file, you simply mv the file into the directory the files are read from.
#!/bin/bash
# Watch the staging directory; close_write fires only once a received file
# has been fully written and closed.
inotifywait -m -e close_write --format '%f' /path/to/tmp | while read -r newfile
do
    mv "/path/to/tmp/$newfile" ~/real
done
