Anant Corporation Blog: Our research, knowledge, thoughts, and recommendations about building and managing online business platforms.
In this blog post, we set up Apache Spark and Apache Airflow in Docker containers and, at the end, run and schedule Spark jobs using the Dockerized Airflow deployment. This matters because Docker images let us sidestep problems commonly encountered in development, such as environment mismatches and dependency conflicts, etc., leading to faster development and deployment to production.
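To give a rough feel for the scheduling piece, here is a minimal sketch of an Airflow DAG that submits a Spark job. It assumes the apache-airflow-providers-apache-spark package is installed and that a Spark connection named spark_default has been configured; the application path and DAG id are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

# Minimal sketch: a daily DAG that hands one PySpark script to spark-submit.
# "spark_default" is the Airflow connection pointing at the Spark master;
# /opt/airflow/jobs/example_job.py is a hypothetical script path.
with DAG(
    dag_id="spark_job_daily",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    submit_job = SparkSubmitOperator(
        task_id="submit_spark_job",
        application="/opt/airflow/jobs/example_job.py",
        conn_id="spark_default",
    )
```

With the containers running, a DAG like this would show up in the Airflow UI and fire once a day on the schedule above.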
In this blog, we run and schedule Spark jobs on Apache Airflow. We build a Spark job that extracts data from an API, transforms the resulting JSON data, and loads it into an S3 bucket. We run Spark and Airflow locally and configure them to communicate through the Airflow UI. To ensure smooth communication between Spark and the S3 bucket, we set up an S3 access point and a dedicated AWS IAM role so that data is sent to the S3 bucket directly from our Spark application.
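To make the extract-transform-load flow concrete, here is a minimal PySpark sketch along those lines. The API URL, column names, and access point alias are all hypothetical placeholders, and writing to an s3a:// path assumes the hadoop-aws connector is on the classpath and AWS credentials (for example, via the IAM role mentioned above) are available.

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-to-s3").getOrCreate()

# Extract: fetch JSON records from the API (hypothetical endpoint).
records = requests.get("https://api.example.com/data").json()

# Transform: load the records into a DataFrame and keep only the
# fields we need (hypothetical column names).
df = spark.createDataFrame(records).select("id", "name", "created_at")

# Load: write the result to S3 through the access point alias
# (hypothetical alias; requires hadoop-aws and valid credentials).
df.write.mode("overwrite").parquet("s3a://my-access-point-alias/output/")

spark.stop()
```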
In Data Engineer’s Lunch #57, we discuss StreamSets and how it can be used for data engineering! The live recording of the Data Engineer’s Lunch, which includes a more in-depth discussion and a demo, is embedded below in case you were not able to attend live. Subscribe to our YouTube Channel to keep up to date and watch Data Engineer’s Lunches live at 12 PM EST on Mondays!
In Data Engineer’s Lunch #59: Spark Tasks and Distribution, we discussed a machine learning example used to investigate how Spark distributes work between nodes. The live recording of the Data Engineer’s Lunch, which includes a more in-depth discussion, is also embedded below in case you were not able to attend live. If you would like to attend a Data Engineer’s Lunch live, it is hosted every Monday at noon EST. Register here now!
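For readers who want to poke at this themselves, below is a minimal sketch of one generic way to inspect how Spark spreads records across partitions (and therefore tasks). It is not the machine learning example from the session; the partition count and row count are arbitrary illustrative values.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("task-distribution-demo").getOrCreate()

# Create a DataFrame split into 8 partitions; each partition becomes
# one task in every stage that processes it.
df = spark.range(0, 1_000_000, numPartitions=8)
print(df.rdd.getNumPartitions())  # -> 8

# glom() collects each partition into a list, so mapping len() over it
# shows how many rows landed in each partition (and thus in each task).
print(df.rdd.glom().map(len).collect())

spark.stop()
```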
In Data Engineer’s Lunch #54: dbt and Spark, we discussed dbt, the data build tool, which manages data transformations with config files rather than code. We connected it to Apache Spark and used it to perform transformations. The live recording of the Data Engineer’s Lunch, which includes a more in-depth discussion, is also embedded below in case you were not able to attend live. If you would like to attend a Data Engineer’s Lunch live, it is hosted every Monday at noon EST. Register here now!
Subscribe to our monthly newsletter below and never miss the latest Cassandra and data engineering news!