Bigdam is a planet-scale data ingestion pipeline. It addresses issues with the traditional pipeline, such as PerfectQueue throughput limitations, latency before collected events become queryable, difficulty maintaining event collector code, and the proliferation of many small temporary and imported files. The redesigned pipeline consists of Bigdam-Gateway for HTTP endpoints, Bigdam-Pool for distributed buffer storage, Bigdam-Scheduler for scheduling import tasks, Bigdam-Queue as a high-throughput queue, and Bigdam-Import for data conversion and import. Consistency is ensured through an at-least-once design, with deduplication performed at the end of the pipeline for simplicity and reliability. All components are designed to scale out horizontally.
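
Because every stage retries until acknowledged, the same event may be delivered more than once; the pipeline stays correct because the final import step discards records it has already seen. The following is a minimal sketch of that end-of-pipeline deduplication, assuming each event carries a client-generated unique ID; the field name `event_id`, the `AtLeastOnceImporter` class, and the in-memory seen-ID store are illustrative assumptions, not Bigdam's actual API:

```python
import uuid
from typing import Iterable


class AtLeastOnceImporter:
    """Final pipeline stage: imports events, dropping duplicates.

    Upstream stages may retry on timeout, so the same event can
    arrive more than once; deduplicating here keeps imports
    effectively exactly-once without coordinating earlier stages.
    """

    def __init__(self) -> None:
        # Illustrative in-memory store; a real deployment would use
        # durable, shared storage keyed by event ID.
        self._seen_ids: set[str] = set()

    def import_events(self, events: Iterable[dict]) -> int:
        imported = 0
        for event in events:
            event_id = event["event_id"]  # assumed client-generated unique ID
            if event_id in self._seen_ids:
                continue  # duplicate from an upstream retry: skip it
            self._seen_ids.add(event_id)
            self._write_to_storage(event)
            imported += 1
        return imported

    def _write_to_storage(self, event: dict) -> None:
        # Placeholder for the real conversion-and-import step.
        print(f"imported {event['event_id']}")


# An upstream retry delivers the same event twice; only one copy is imported.
importer = AtLeastOnceImporter()
eid = str(uuid.uuid4())
batch = [
    {"event_id": eid, "payload": "page_view"},
    {"event_id": eid, "payload": "page_view"},  # duplicate delivery
]
assert importer.import_events(batch) == 1
```

Deferring deduplication to the last stage like this lets every earlier component retry freely, which is what allows each of them to scale out horizontally without cross-stage coordination.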