Scripts I've written that I never want to lose. Mostly bash.
Unless otherwise stated, all content is licensed under the GNU General Public License (GPL) version 3: https://www.gnu.org/licenses/quick-guide-gplv3.html
Test images with >100 segments
Or confirm that's impossible. Imports are limited to 40G, so...
Occasionally in cloud-image-transfer.sh, when attempting to delete the source container, a 409 error (conflict) is detected and the script bails. I suspect the cause of this is an incomplete deletion of the objects within that container, which was probably caused by an incomplete container listing returned by the API. Consider the 3x60sec listing strategy used in the transfer, and/or just use the same container listing as from the transfer. Maybe I'm already doing that? Regardless, this needs further investigation.
Not 100% certain how I want to work around this yet. I'll have to test for 401s on every curl, and either switch to API-key auth instead of token auth, or prompt for input whenever I need a new token. I'm not a huge fan of either option: user input means a long, tedious process is suddenly no longer unattended, and API-key auth means bash shell histories are likely to contain non-expiring admin-level credentials.
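If the re-auth route wins out, the 401-aware wrapper could be shaped roughly like this. It's only a sketch: `reauth` is a hypothetical function that would refresh $AUTHTOKEN (via API-key exchange or a prompt), and this version only reports the status code.

```shell
# Sketch only: funnel every API call through one wrapper, so a 401
# triggers a single re-auth and retry. "reauth" is hypothetical - it
# would refresh $AUTHTOKEN however we decide (API key, or prompting).
api_call() {
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' \
              -H "X-Auth-Token: $AUTHTOKEN" "$@")
  if [ "$code" = "401" ]; then
    reauth
    code=$(curl -s -o /dev/null -w '%{http_code}' \
                -H "X-Auth-Token: $AUTHTOKEN" "$@")
  fi
  echo "$code"
}
```

The real version would also need to keep the response body somewhere, since most of the script's calls care about more than the status code.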
For cloud-image-transfer.sh, stop using a dynamic manifest - it has the potential for false-positive name matches. Granted, the odds are astronomically small, but it's possible with a dynamic manifest and not possible at all with a static one.
Vault names have changed from "MossoCloudFS_xxxx" to just the tenant ID. Somehow I need to detect the correct vault name by region. Sadly, this is probably going to require JSON-awareness within bash, which means I need to roll in my bash-based JSON-to-SNMP-OID compiler, or add a dependency. Hopefully not.
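If all that's needed is pulling single string values out of the (flat) auth response, a sed one-liner might be enough to avoid both the compiler and a new dependency. A sketch, not a real parser; the `tenantId` key in the example is just illustrative:

```shell
# Sketch: extract one string value from flat, unescaped JSON without
# adding a dependency. Handles only simple "key":"value" pairs - just
# enough for picking a single field out of an API response.
json_value() {
  local key="$1"
  sed -n "s/.*\"$key\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p" | head -n 1
}
# Example: echo '{"region":"DFW","tenantId":"123456"}' | json_value tenantId
```

This breaks on escaped quotes and nested structures, so it's a stopgap rather than a substitute for real JSON awareness.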
In cloud-image-transfer.sh, there's this error message:
Error: You won't be able to import this image at the destination,
because it was taken of a server with >40G OS disk. You'll need
to build a Standard NextGen server from this image at the source
region, resize it to <=2G RAM (<=40G disk), then take a new image
and transfer that new image instead.
Note: In order to resize down, you may need to first manually set
the min_disk and min_ram values on this image to <= 40 disk and
1024 RAM.
The min_disk value is set when the image is taken, based on the total VDI-chain size. Make this more descriptive, and suggest fixes (resize + cat /dev/zero)
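The suggested fix could be spelled out for the user roughly like this. It's a sketch of what would be run inside the (already resized) source server before taking the new image, not something the script itself does; the filler path and size cap are arbitrary.

```shell
# Sketch of the zero-fill trick, run inside the resized server: fill
# free space with zeroes so the VDI chain shrinks when the new image
# is taken, then delete the filler file.
zerofill() {
  local filler="$1" mb="$2"
  # In the real workaround you'd omit "count" and let dd run until the
  # disk is full; the size cap keeps this sketch safe to run anywhere.
  dd if=/dev/zero of="$filler" bs=1M count="$mb" 2>/dev/null
  sync
  rm -f "$filler"
}
```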
Copy vm_mode image metadata, just like we're already doing with org.openstack__1__os_version
No way to estimate this until after the export task is complete, but once that does finish, check the total size of the Cloud Files container. If >40G, then the import's gonna fail and there's no sense transferring the image to Cloud Files in the destination region. Print a meaningful error message, and suggest a workaround (resize + cat /dev/zero)
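One possible shape for that check, with the header parsing split out from the curl so it's testable. X-Container-Bytes-Used is the header Swift/Cloud Files returns on a container HEAD; the variable and function names are assumptions, not what the script currently uses.

```shell
# Sketch: HEAD the container and compare X-Container-Bytes-Used
# against the 40G import limit before transferring anything.
container_bytes() {
  # Reads raw HTTP response headers on stdin; prints the byte count.
  tr -d '\r' | awk 'tolower($1) == "x-container-bytes-used:" { print $2 }'
}

check_import_size() {  # usage: check_import_size <token> <container-url>
  local bytes limit=$(( 40 * 1024 * 1024 * 1024 ))
  bytes=$(curl -s -I -H "X-Auth-Token: $1" "$2" | container_bytes)
  if [ "${bytes:-0}" -gt "$limit" ]; then
    echo "Error: exported image is ${bytes} bytes (>40G); import will fail." >&2
    return 1
  fi
}
```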
In cloud-image-transfer.sh, see if we can detect random 503 errors from the API and recover gracefully instead of bailing. As-is, the current behaviour is the following:
2015-01-14 03:43:32 Waiting for completion - will check every 60 seconds. Error: Unable to query task details - maybe API is unavailable? Maybe your API token just expired? Script will attempt to retry. Response data from API was as follows: Response code: 503
Docs say this means the API was "unavailable":
http://docs.rackspace.com/images/api/v2/ci-devguide/content/GET_getTask_tasks__taskID__Image_Task_Calls.html#GET_getTask_tasks__taskID__Image_Task_Calls-Request
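One possible recovery shape: a generic wrapper that retries whatever command it's given while that command reports 503, up to a fixed number of attempts. The 60-second default mirrors the script's existing poll interval; the rest is an assumption, not current behaviour.

```shell
# Sketch: retry a command (which prints an HTTP status code) while it
# keeps returning 503, up to $1 attempts, instead of bailing at once.
retry_503() {
  local tries="$1" delay="${RETRY_DELAY:-60}" attempt code
  shift
  for (( attempt = 1; attempt <= tries; attempt++ )); do
    code=$("$@")
    [ "$code" != "503" ] && break
    sleep "$delay"
  done
  echo "$code"
  [ "$code" != "503" ]
}
```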
Came across your script late last night while trying to clone a VM across regions to help with the mass reboots. I kept getting a 401 when running it and can't seem to figure out why. Here's how I'm running it:
./cloud-image-transfer.sh -a <account_api_key> -t <account_num> -1 -r ord -i <image_uuid> -R dfw
where the <> placeholders are the actual values from my account.
This is the output:
Attempting to authenticate against Identity API with source account info.
Error: Unable to authenticate against API using SRCAUTHTOKEN and SRCTENANTID
provided. Raw response data from API was the following:
Response code: 401
401
----------------------------------------
Script exited prematurely.
You may need to manually delete the following:
----------------------------------------
I figure the 401 means I'm not authenticating properly, but I know for sure I'm using the right API key and account number, because the same credentials work when I interact with the API directly via curl. If you could let me know what I'm doing wrong, I'd greatly appreciate it for next time.
Hey Andrew, just wanted to let you know that line 87 has a typo and is missing a closing quote.
This is for cloud-image-transfer.sh
https://github.com/StafDehat/scripts/blob/master/cloud-image-transfer.sh
Brilliant little script. Thanks for making it GPL.
For cloud-image-transfer.sh.
We'd use /dev/tcp instead. One caveat: /dev/tcp redirection is implemented by bash itself rather than the OS, so I'm not 100% on compatibility with non-bash shells or non-Linux Unix systems.
if ( echo > /dev/tcp/google.com/80 ) &>/dev/null; then
  echo open
else
  echo closed
fi
Not sure if I handle this. Test it.
Add -servername option to test-ciphers.sh, for SNI-enabled hosts.
Jose ran into an issue where an attempt to download a 128M segment created an empty file, and the script retried indefinitely. If the API returned an error, I didn't catch it appropriately. Even if a completely unexpected issue occurred, we should at least bail after 10 tries instead of retrying forever.
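A bounded-retry version might look like this. `fetch_segment` is a stand-in for whatever the script's actual curl download call is (hypothetical name); the empty-file check covers exactly the failure Jose hit.

```shell
# Sketch: bound the segment-download retries at 10 and treat an empty
# output file as a failure, even if the download command reported success.
download_segment() {
  local url="$1" out="$2" try
  for try in 1 2 3 4 5 6 7 8 9 10; do
    if fetch_segment "$url" "$out" && [ -s "$out" ]; then
      return 0
    fi
    rm -f "$out"
    sleep "${RETRY_DELAY:-5}"
  done
  echo "Error: giving up on $url after 10 attempts." >&2
  return 1
}
```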
Add optional command-line argument "-n", so user has the ability to name the image at destination something different than the original image.
Hi @StafDehat! I have checked your repo and would suggest that you migrate your scripts to https://sparrowhub.org - an automation scripts repository. Benefits you will gain:
For examples of existing scripts, take a look at http://sparrowhub.org; let me know and I will help with the migration process.
P.S. If it does not sound interesting, just close the issue.
Thanks. Alexey Melezhik, the author of SparrowHub.
Currently this script downloads 1G, saving it locally, then uploads that 1G to the destination and deletes the local 1G copy. This works, but it might be better to do an I/O stream, passing the curl output directly into the next curl's input. Look into it when you get a chance.
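The streaming version might look something like this; the token and URL variable names are assumptions standing in for whatever the script already sets. curl's `-T -` reads the upload body from stdin, so the segment never touches local disk.

```shell
# Sketch: stream one segment from source to destination by piping the
# download curl straight into the upload curl (-T - uploads from stdin).
stream_segment() {
  local seg="$1"
  curl -s -H "X-Auth-Token: $SRCTOKEN" "$SRCCONTAINERURL/$seg" |
    curl -s -T - -H "X-Auth-Token: $DSTTOKEN" "$DSTCONTAINERURL/$seg"
}
```

One thing to verify: with `-T -` curl can't know the Content-Length up front, so it may fall back to chunked transfer encoding, and the destination API would need to accept that.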
Re: 1de6aa2
"head -n -1 is not supported on some systems (e.g. MacOS). Any idea of a good substitution?"
httpd fullstatus fails. Use apache2ctl.
Also, tmpreaper not tmpwatch.
Those 2x named pipes - make 'em use the tmpfile command to find a name - don't hardcode it.
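The note says "tmpfile"; assuming `mktemp` is what's meant (it's the common utility for this), the pipes could be created like so. Note that `mktemp -u` only prints an unused name without creating anything, so there's a small race before `mkfifo` claims it; mkfifo failing if something grabs the name first is the safety net.

```shell
# Sketch: generate an unused name with mktemp -u, then let mkfifo
# claim it, instead of hardcoding the pipe paths.
make_pipe() {
  local p
  p=$(mktemp -u) && mkfifo "$p" && echo "$p"
}
```

Usage would be something like `pipe1=$(make_pipe)` for each of the two pipes, with an `rm -f` on both in the cleanup path.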
Detect & bail if container doesn't exist, at the least.