Throughout my career, I’ve seen many evolutions of how we, as data engineers, “ship” pipelines to production. Some of the common methods are:
- CI/CD pipelines in GitLab/GitHub 
- Emailing the script to the DBA to run (No one will ever admit this though 😆) 
- Saving bash jobs to a shared drive and using cron to run them (or Windows Task Scheduler) 
- Airflow+S… 
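The cron approach above can be sketched in a few lines. This is a minimal, hypothetical example: the paths, the job name, and the placeholder "load" step are all illustrative, not taken from any real setup.

```shell
#!/usr/bin/env bash
# Hypothetical nightly pipeline job saved to a shared drive.
set -euo pipefail

log() { printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1"; }

log "pipeline started"
# extract / transform / load steps would go here
rows_loaded=42   # placeholder result for this sketch
log "pipeline finished, rows loaded: ${rows_loaded}"

# Schedule it nightly at 02:00 with a crontab entry such as:
#   0 2 * * * /mnt/shared/jobs/run_pipeline.sh >> /var/log/pipeline.log 2>&1
```

The `set -euo pipefail` line is the one piece worth copying verbatim: without it, cron will happily keep scheduling a job that fails silently halfway through.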