
aws-glue-blueprint-libs's Introduction

aws-glue-blueprint-libs

This repository provides a Python library to build and test AWS Glue Custom Blueprints locally. It also provides sample blueprints addressing common use-cases in ETL.

Sample Blueprints

  • Crawling Amazon S3 locations: This blueprint crawls multiple Amazon S3 locations to add metadata tables to the Data Catalog.

  • Importing Amazon S3 data into a DynamoDB table: This blueprint imports data from Amazon S3 into a DynamoDB table.

  • Conversion: This blueprint converts input files in various standard file formats into Apache Parquet format, which is optimized for analytic workloads.

  • Converting character encoding: This blueprint converts files in non-UTF encodings into UTF encoded files.

  • Compaction: This blueprint creates a job that compacts input files into larger chunks based on desired file size.

  • Partitioning: This blueprint creates a partitioning job that places output files into partitions based on specific partition keys.

  • Importing an AWS Glue table into a Lake Formation governed table: This blueprint imports a Glue Catalog table into a Lake Formation governed table.

  • Creating table definitions from Glue Custom Connection: This blueprint accesses data stores using Glue Custom Connectors, reads the records, and populates table definitions in the Glue Data Catalog based on the record schema.
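Each blueprint in this repository pairs a layout script with a blueprint.cfg file that names the layout-generator entry point and declares user parameters. A rough, illustrative sketch of that config as a Python dict (the "WorkflowName" parameter and the "conversion.layout" module path are assumed examples, not taken from a specific sample):

```python
import json

# Illustrative blueprint.cfg contents: "layoutGenerator" points at the
# generate_layout function in the layout script, and "parameterSpec"
# declares the parameters a user supplies at blueprint-run time.
blueprint_cfg = {
    "layoutGenerator": "conversion.layout.generate_layout",
    "parameterSpec": {
        "WorkflowName": {
            "type": "String",
            "collection": False,
            "description": "Name of the workflow to create",
        }
    },
}

print(json.dumps(blueprint_cfg, indent=2))
```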

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

aws-glue-blueprint-libs's People

Contributors

amazon-auto, hocanint-amzn, keerthicha, lmbo-2020, moomindani, skasai5296

aws-glue-blueprint-libs's Issues

RecrawlPolicy not in Crawler constructor arguments

It seems that it is not possible to choose the RecrawlPolicy when instantiating a Crawler Object although it is stated in the documentation that all properties defined in the Crawler API can be provided to the constructor (https://docs.aws.amazon.com/glue/latest/dg/developing-blueprints-code-classes.html#developing-blueprints-code-crawlerclass).

Are there any plans to add this feature in future updates?

Example:

Crawler(
    Name="crawler",
    Role=iam_role,
    RecrawlPolicy={"RecrawlBehavior": "CRAWL_NEW_FOLDERS_ONLY"},
    Targets=targets,
)
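Until the library supports the property directly, one possible workaround is to set the policy after the blueprint has created the crawler, via the boto3 update_crawler call (which does accept RecrawlPolicy). A minimal sketch, with an assumed crawler name:

```python
def recrawl_update_args(crawler_name, behavior="CRAWL_NEW_FOLDERS_ONLY"):
    """Build kwargs for glue.update_crawler; valid behaviors are
    CRAWL_EVERYTHING, CRAWL_NEW_FOLDERS_ONLY, and CRAWL_EVENT_MODE."""
    return {"Name": crawler_name, "RecrawlPolicy": {"RecrawlBehavior": behavior}}


def apply_recrawl_policy(crawler_name):
    # boto3 is imported here so the pure helper above works without the SDK;
    # this call needs AWS credentials and an existing crawler.
    import boto3

    boto3.client("glue").update_crawler(**recrawl_update_args(crawler_name))
```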

ModuleNotFoundError: No module named 'pyspark'

After step 9 in the tutorial I get the error below, and the workflow is not created.

Error message:
Unknown error executing layout generator function ModuleNotFoundError: No module named 'pyspark'
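A hedged suggestion, assuming the layout generator (directly or via the job scripts it imports) pulls in pyspark: install pyspark for local testing, and keep pyspark imports out of the layout-generator module itself when running in the Glue service, since the layout runs without a Spark environment.

```shell
# For local blueprint testing only: make the layout generator's
# pyspark imports resolvable. In the Glue service, move pyspark
# imports into the job scripts instead of the layout module.
pip install pyspark
```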

Actions cannot contain more than two crawlers

I can only run more than two crawlers if I use the sample crawl-s3-locations script, which chains them with the DependsOn method. Once I remove the method, the workflow is limited to two crawlers (two S3 buckets).
However, the same error also occurs when I combine the crawl-s3-locations script with a job that converts file types. There's no further explanation, and it seems I have to split crawlers and jobs into two different blueprints. Can anyone explain what causes this error?
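The DependsOn pattern from the crawl-s3-locations sample can be sketched with plain dictionaries (illustrative only — in the actual blueprint library each entry would be a Crawler object): chaining makes each crawler wait for the previous one to succeed, which appears to sidestep a per-trigger limit on crawler actions.

```python
def chain_crawlers(names):
    """Return crawler definitions chained so each one depends on the
    previous crawler succeeding, instead of all hanging off one trigger."""
    crawlers, prev = [], None
    for name in names:
        crawler = {"Name": name}
        if prev is not None:
            crawler["DependsOn"] = {prev: "SUCCEEDED"}
        crawlers.append(crawler)
        prev = name
    return crawlers


# Four crawlers: the first has no dependency; each later one waits on its predecessor.
layout = chain_crawlers(["c1", "c2", "c3", "c4"])
```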

start workflow with On EventBridge trigger

To the best of my knowledge, there is no way to add an EventBridge trigger to start the workflow developed via a blueprint.
It would be nice if the EventBridge resource itself could be provisioned here, but that might be asking for too much. Maybe just the trigger node that lets the workflow be started from EventBridge could be part of the blueprint, rather than my having to wire it up after the fact.

Is there a way to add custom Tags to Blueprint workflow Triggers?

Hi All,

Workflows and jobs created through a blueprint accept a Tags argument with a key:value dictionary, so custom tags can be passed to them. But there seems to be no way to add custom tags to the auto-generated triggers (the starting trigger and the dependency triggers), although they all get BlueprintName as a default tag.

Is it possible to add a capability to assign custom tags to the triggers as well? I'm having issues cleaning up resources created by a previous blueprint run: I can clean up jobs/crawlers/workflows based on the custom tags, but since triggers don't support this, there's no programmatic way to delete them, and re-running the blueprint clashes with the existing trigger names.

If there's any other workaround to this problem, like a cascade delete of a Glue Workflow, please do suggest!

Thank you!
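There is no built-in cascade delete, but since get_workflow(..., IncludeGraph=True) returns the workflow graph including trigger nodes, a script can delete the triggers before deleting the workflow. A sketch using standard boto3 Glue calls (untested against a live account):

```python
def trigger_names(graph):
    """Extract trigger node names from a workflow graph as returned by
    glue.get_workflow(Name=..., IncludeGraph=True)."""
    return [
        node["Name"]
        for node in graph.get("Nodes", [])
        if node.get("Type") == "TRIGGER"
    ]


def cascade_delete_workflow(workflow_name):
    # boto3 is imported here so the helper above is usable without the SDK.
    import boto3

    glue = boto3.client("glue")
    workflow = glue.get_workflow(Name=workflow_name, IncludeGraph=True)["Workflow"]
    for name in trigger_names(workflow.get("Graph", {})):
        glue.delete_trigger(Name=name)
    glue.delete_workflow(Name=workflow_name)
```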

AWS Glue Start-Blueprint-Run running into timeout issues with increased number of jobs

Describe the bug
Using the AWS CLI (Version: aws-cli/2.8.6 Python/3.9.11 Windows/10 exe/AMD64 prompt/off), starting a Glue blueprint run works fine when the number of objects generated inside the workflow (triggers/Glue jobs) is under 30-40 in total. But when more objects are generated inside the Glue workflow, the blueprint run seems to time out and gets stuck in the RUNNING state.

Expected Behavior
We expect the blueprint run to spin up the workflow with all of the jobs as needed. This is a sample workflow where each row consists of two jobs with a trigger in the middle for each task, and there can be N tasks like this:

[Image: workflow diagram — rows of two jobs linked by a trigger, one row per task]

Current Behavior
With up to 8-10 such rows of tasks, the blueprint run succeeds and doesn't time out; with more, the run is stuck in the RUNNING state and we never get the workflow generated.

Reproduction Steps
This is the AWS CLI command we're using right now:
aws glue start-blueprint-run --blueprint-name BLUEPRINT_NAME --role-arn IAMRoleARN --parameters "file://FILE_PATH.json" --region us-east-1 --profile test-naga --cli-connect-timeout 900 --cli-read-timeout 900

The JSON object takes in a collection of table names and loops over them, and the layout file creates the workflow shown in the image above. It's not easy to reproduce with the exact same example, but one of the samples at https://github.com/awslabs/aws-glue-blueprint-libs/tree/master/samples could be used with a higher number of jobs/objects created through the blueprint run.

Possible Solution
I'm wondering if it's related to --cli-connect-timeout and --cli-read-timeout, whose default is 60 seconds: the blueprint run seems to try to spin up all the resources in that window, and if there are more objects than fit in that time, the whole process times out and gets stuck in the RUNNING state without doing anything.

We also tried setting these values to 0, with the same result. The number of objects spun up before the timeout appears random from run to run.
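One way to take the CLI timeouts out of the picture is to start the run and then poll its status from a script: start-blueprint-run returns a RunId, and get_blueprint_run reports the run's state. A hedged sketch (which states count as terminal is an assumption here; ROLLING_BACK is treated as still in progress):

```python
import time

# Assumed terminal states for a blueprint run; RUNNING and ROLLING_BACK
# are treated as still in progress.
TERMINAL_STATES = {"SUCCEEDED", "FAILED"}


def is_terminal(state):
    return state in TERMINAL_STATES


def wait_for_blueprint_run(blueprint_name, run_id, poll_seconds=30):
    # boto3 is imported here so is_terminal works without the SDK installed.
    import boto3

    glue = boto3.client("glue")
    while True:
        run = glue.get_blueprint_run(BlueprintName=blueprint_name, RunId=run_id)
        state = run["BlueprintRun"]["State"]
        if is_terminal(state):
            return state
        time.sleep(poll_seconds)
```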

CLI version used
2.8.6

Environment details (OS name and version, etc.)
Windows 10
