Oracle Recovery Manager, long available in existing Oracle Database versions such as 10g and 11g, has become an increasingly important feature with the arrival of Oracle Zero Data Loss Recovery Appliance. The author of the popular OTN series 「しばちょう先生の試して納得!DBAへの道」 ("Shibacho-sensei's Try It and See! The Road to DBA") discusses it here, covering everything from practical RMAN backup operations to the internal workings of fast incremental backups and how to tune them, holding nothing back.
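As context for the fast incremental backup topic, here is a minimal sketch of the mechanism behind it: block change tracking lets RMAN read only changed blocks instead of scanning every datafile. The sketch drives the rman client from Python; the tracking-file path and the use of OS authentication (CONNECT TARGET /) are illustrative assumptions, not the article's own setup.

```python
import subprocess

# Sketch only: enable block change tracking (the feature that makes RMAN
# incremental backups "fast"), then take a level 1 incremental backup.
# The tracking-file path and OS authentication are illustrative assumptions.
rman_script = """
CONNECT TARGET /;
SQL "ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct/change_tracking.f'";
BACKUP INCREMENTAL LEVEL 1 DATABASE;
"""

# Feed the script to the rman client on stdin; check=True fails loudly on error.
subprocess.run(["rman"], input=rman_script, text=True, check=True)
```

With tracking enabled, subsequent level 1 backups consult the change-tracking file rather than scanning whole datafiles, which is the core of the tuning story the article goes into.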
This document compares Apache Kafka and AWS Kinesis for message streaming. It explains that Kafka is an open-source publish-subscribe messaging system designed as a distributed commit log, while Kinesis is AWS's managed streaming data service. It also notes key differences, such as Kafka typically handling over 8,000 messages per second while Kinesis handles under 100 messages per second.
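To make the Kafka side of that comparison concrete, here is a minimal publish sketch using the third-party kafka-python client; the broker address localhost:9092 and the topic name events are assumptions for illustration.

```python
from kafka import KafkaProducer  # third-party package: kafka-python

# Connect to a broker; the address and topic name below are assumptions.
producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Fire a batch of small messages; the client buffers and batches them
# internally, which is part of how Kafka sustains high producer throughput.
for i in range(10_000):
    producer.send("events", f"message-{i}".encode("utf-8"))

producer.flush()  # block until all buffered messages are sent
```

The batching behavior shown here (many sends, one flush) is one reason a single Kafka producer can push thousands of messages per second.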
This document discusses the application of PostgreSQL in a large social-infrastructure project involving smart meter management. It describes three main missions: (1) loading 10 million data sets within 10 minutes, (2) retaining data for 24 months, and (3) stabilizing performance of large-scale SELECT statements. Various optimizations to achieve these missions are discussed, including data modeling, performance tuning, reducing data size, and controlling execution plans. All three missions were completed successfully by applying PostgreSQL expertise and tailoring it to the project's large-scale requirements.
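The document does not reproduce the project's exact load path, but as a rough sketch of PostgreSQL's standard fast-ingest technique, COPY streams rows in bulk with far less per-row overhead than individual INSERTs. The table, columns, and connection string below are hypothetical.

```python
import io
import psycopg2  # third-party PostgreSQL driver

# Hypothetical schema; the project's real schema is not shown in the talk:
# CREATE TABLE meter_readings (meter_id int, read_at timestamptz, kwh numeric);

conn = psycopg2.connect("dbname=meters")  # connection string is an assumption
with conn, conn.cursor() as cur:
    # Build a CSV payload in memory; a real loader would stream from files.
    payload = io.StringIO("".join(
        f"{meter_id},2024-01-01T00:00:00Z,{meter_id * 0.1:.1f}\n"
        for meter_id in range(1, 1001)
    ))
    # COPY is PostgreSQL's bulk-load path: one statement, minimal overhead.
    cur.copy_expert(
        "COPY meter_readings (meter_id, read_at, kwh) FROM STDIN WITH (FORMAT csv)",
        payload,
    )
conn.close()
```

Bulk COPY, combined with the data-modeling and data-size reductions the document describes, is the kind of approach that makes a 10-million-rows-in-10-minutes target plausible.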
The Future of Analytics, Data Integration and BI on Big Data Platforms, by Mark Rittman
The document discusses the future of analytics, data integration, and business intelligence (BI) on big data platforms such as Hadoop. It covers how BI has evolved from old-school data warehousing, through enterprise BI tools, to big data platforms. Newer technologies like Impala, Kudu, and dataflow pipelines have made Hadoop fast enough for analytics; machine learning can be used for automatic schema discovery; and emerging open-source BI tools, platforms, and notebooks bring new approaches to BI. The document concludes that Hadoop has become the default platform, and the future, for analytics.
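As a small illustration of the "fast SQL on Hadoop" point, here is a sketch of querying Impala from Python with the third-party impyla client; the host, port, and table name are assumptions for illustration.

```python
from impala.dbapi import connect  # third-party package: impyla

# Host, port, and table name below are illustrative assumptions.
conn = connect(host="impala-host.example.com", port=21050)
cur = conn.cursor()

# Interactive SQL directly against data stored in Hadoop: this is the
# kind of speed-up Impala (with Kudu as a storage engine) brought to BI.
cur.execute("SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page LIMIT 10")
for page, hits in cur.fetchall():
    print(page, hits)

conn.close()
```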