Arvato Systems implemented an automated Extract-Transform-Load (ETL) pipeline built on S3 events, Lambda, and DynamoDB. Uploading a data chunk exported from the original Oracle database triggers the transformation process, and because each chunk is handled independently, the pipeline scales out with massive parallelism. In several steps, the raw format is converted into the various target formats using transformation templates. The results are then imported into Redshift for data warehousing and into a DynamoDB table for machine learning purposes. The entire setup is scripted as infrastructure-as-code via CloudFormation and integrated into a CI/CD pipeline that runs unit tests, deploys into a test environment, and finally promotes into production. With Amazon SageMaker and API Gateway, machine learning experts could easily pull in the S3 and DynamoDB data to create and train a model and host an endpoint for it. An API Gateway with a Lambda backend provides API access to that endpoint.
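
The event-driven core of such a pipeline is compact. The sketch below is a minimal, hypothetical version of the transformation Lambda: it reacts to an S3 PUT event for an uploaded chunk, applies a stand-in transformation (a simple CSV parse in place of the real transformation templates), stages the curated output back to S3 for the Redshift import, and feeds the same records into DynamoDB. All bucket, prefix, and table names are illustrative assumptions, not the names used in the actual project.

```python
import csv
import io
import json
import os

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# Illustrative names; the real pipeline's buckets and tables are not public.
CURATED_BUCKET = os.environ.get("CURATED_BUCKET", "etl-curated")
FEATURE_TABLE = os.environ.get("FEATURE_TABLE", "ml-features")


def transform(raw_bytes):
    """Stand-in for the template-driven transformation: parse a CSV chunk
    exported from Oracle into a list of dicts."""
    reader = csv.DictReader(io.StringIO(raw_bytes.decode("utf-8")))
    return list(reader)


def handler(event, context):
    """Invoked once per uploaded chunk, so chunks are processed in parallel."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Extract: fetch the raw chunk that triggered this invocation.
        raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Transform: convert the raw format into the target format.
        rows = transform(raw)

        # Load (1): stage newline-delimited JSON for a Redshift COPY.
        ndjson = "\n".join(json.dumps(r) for r in rows)
        s3.put_object(
            Bucket=CURATED_BUCKET,
            Key=f"redshift/{key}.json",
            Body=ndjson.encode("utf-8"),
        )

        # Load (2): push the same records into DynamoDB for ML lookups.
        # Assumes each row carries the table's partition key attribute.
        table = dynamodb.Table(FEATURE_TABLE)
        with table.batch_writer() as batch:
            for row in rows:
                batch.put_item(Item=row)

    return {"processed": len(event["Records"])}
```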
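Redshift typically ingests such staged files with a COPY statement; one way to issue it from the same pipeline, sketched here under assumed cluster, database, and IAM role names, is the Redshift Data API:

```python
import boto3

redshift = boto3.client("redshift-data")

# Assumed identifiers; substitute your own cluster, database, and IAM role.
response = redshift.execute_statement(
    ClusterIdentifier="etl-warehouse",
    Database="analytics",
    DbUser="etl_user",
    Sql="""
        COPY sales_facts
        FROM 's3://etl-curated/redshift/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS JSON 'auto';
    """,
)
print("Statement id:", response["Id"])
```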
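The final hop, API Gateway in front of a Lambda that calls the SageMaker endpoint, fits in a few lines. The sketch below follows the standard Lambda proxy integration contract; the endpoint name is a placeholder, not the project's actual model:

```python
import os

import boto3

runtime = boto3.client("sagemaker-runtime")

# Placeholder name for the endpoint hosted in SageMaker.
ENDPOINT = os.environ.get("SAGEMAKER_ENDPOINT", "demo-model-endpoint")


def handler(event, context):
    """Lambda proxy integration: forward the HTTP request body to the
    SageMaker endpoint and return its prediction as the HTTP response."""
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT,
        ContentType="application/json",
        Body=event["body"],
    )
    prediction = response["Body"].read().decode("utf-8")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": prediction,
    }
```

Keeping the inference call behind API Gateway and Lambda, rather than exposing the SageMaker endpoint directly, lets the same CI/CD pipeline govern authentication, throttling, and deployment of the serving path.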