This session gives a brief introduction to Amazon DynamoDB, a NoSQL database service, and walks through the newly announced time-to-live (TTL) data management feature and the new in-memory cache capability, Amazon DynamoDB Accelerator (DAX).
Speaker: Pranav Nambiar, General Product Manager, Amazon DynamoDB, Amazon Web Services
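As a rough sketch of how the TTL feature is typically used (the table name `sessions` and attribute name `expires_at` are illustrative choices, not from the talk): items carry a Unix-epoch timestamp attribute, and DynamoDB deletes items some time after that timestamp passes.

```python
import time

def expiring_item(pk, payload, ttl_seconds, now=None):
    """Build a DynamoDB item dict whose `expires_at` attribute holds the
    Unix epoch (seconds) after which DynamoDB's TTL process may delete it."""
    now = time.time() if now is None else now
    item = {"pk": pk, **payload}
    item["expires_at"] = int(now) + ttl_seconds
    return item

# Enabling TTL on the table is a one-time call (hypothetical table name):
#   import boto3
#   boto3.client("dynamodb").update_time_to_live(
#       TableName="sessions",
#       TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
#   )

item = expiring_item("user#42", {"cart": "abc"}, ttl_seconds=3600, now=1_700_000_000)
```

Note that TTL deletion is lazy: expired items may linger briefly, so queries should still filter on `expires_at` if strict expiry matters.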
Learn how Amazon Redshift, our fully managed data warehouse, can help you analyze all of your data quickly and cost-effectively using your existing BI tools. The session also includes an introduction to the service, which uses MPP, a scale-out architecture, and columnar storage.
AWS provides the AWS Database Migration Service and the AWS Schema Conversion Tool to help customers move their existing databases to the cloud easily. In this hands-on session you will learn how to use these tools to migrate an Oracle database to an Amazon Aurora database.
Speaker: John Winford, Senior Technical Manager, Amazon Web Services
김상필, Solutions Architect, Amazon Web Services
In this session we will demonstrate air traffic control and defense using real-time processing. We will cover best practices for data ingestion, storage, processing, and visualization with AWS services such as Kinesis, DynamoDB, Lambda, Redshift, QuickSight, and Amazon Machine Learning.
Path to the future #4 - Real-time data ingestion, processing, and analysis - Amazon Web Services LATAM
Amazon Redshift is a fast, petabyte-scale, fully managed data warehouse that lets you analyze all of your data simply and cost-effectively using your existing business intelligence tools. In this hands-on session we will look at Redshift's distributed architecture, which enables massively parallel processing, and learn best practices for integrating and loading data from a variety of sources and formats.
Speaker: 김상필, Solutions Architect, Amazon Web Services
Big data is now a familiar concept, but how to apply it to your business and get the most out of it still deserves careful thought. Storing, analyzing, and visualizing valuable data easily is a key step toward gaining business insight.
This session introduces how to build simpler, faster big data analytics services using the various analytics tools AWS provides, such as Amazon EMR (Elastic MapReduce), Amazon Redshift, and Amazon Kinesis.
This document summarizes Amazon DynamoDB features and new capabilities presented at AWS re:Invent 2017. It covers three topics:
1) How Samsung migrated from Cassandra to DynamoDB, improving performance and reducing costs by 50%+.
2) An evaluation of new DynamoDB capabilities such as global tables, encryption at rest, and on-demand backups.
3) Best practices for migration, including decreasing thread counts and item batch sizes to control load.
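The load-control idea in (3), fewer concurrent writers and smaller item batches, can be sketched as follows. This is a minimal illustration, not the actual migration code; the function names are invented, and the batch cap of 25 matches DynamoDB's BatchWriteItem limit:

```python
import time
from itertools import islice

def batches(items, batch_size):
    """Yield fixed-size chunks; shrinking batch_size lowers the burst size
    of each write and therefore the load on the target table."""
    it = iter(items)
    while True:
        chunk = list(islice(it, batch_size))
        if not chunk:
            return
        yield chunk

def migrate(items, write_batch, batch_size=25, pause_s=0.0):
    """Write items in capped batches, optionally pausing between batches
    to keep consumed write capacity below the provisioned ceiling."""
    written = 0
    for chunk in batches(items, batch_size):
        write_batch(chunk)       # e.g. one DynamoDB BatchWriteItem call
        written += len(chunk)
        if pause_s:
            time.sleep(pause_s)  # throttle between bursts
    return written

sizes = []
migrate(range(60), lambda chunk: sizes.append(len(chunk)), batch_size=25)
```

Running several such loops in parallel is where the "decreasing threads" lever applies: fewer loops means a lower aggregate write rate.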
This session reviews best-practice architectures for big data analytics on AWS and introduces, with customer case studies, the features and latest capabilities of Amazon Athena, an interactive query service that makes it easy to analyze data stored in Amazon S3 using standard SQL.
Speaker: Greg Khairallah, Business Development Manager, Amazon Big Data and Athena, Amazon Web Services
This document discusses various options for migrating data and workloads between on-premises environments and AWS. It covers tools like AWS Database Migration Service for database migration, VM Import/Export for virtual machine migration, copying files between S3 buckets, and using services like Route53 for transitioning traffic during a migration. Specific techniques discussed include copying AMIs, EBS snapshots, security groups, and database parameters between regions; using the AWS Schema Conversion Tool; and DynamoDB cross-region replication.
Amazon Redshift is a fast, petabyte-scale, fully managed data warehouse that lets you analyze all of your data simply and cost-effectively using your existing business intelligence tools. This session covers best practices and considerations for building a data warehouse and analyzing data with Redshift, along with practical considerations for Redshift Spectrum, which lets you run complex queries directly against exabyte-scale data in Amazon S3.
Speaker: 정영준, Solutions Architect, Amazon Web Services
Takuya Tachibana shared best practices for using AWS in rural areas with limited budgets. He discussed using t2 instances for low-cost compute and offloading CPU tasks to Lambda. He also highlighted using pay-per-use services like Cloudflare, S3, Lambda and Spot Instances to build systems for statistical analysis, content delivery, and big data analytics without large fixed costs. By leveraging these techniques, he was able to help local governments and businesses implement projects cost effectively.
Amazon Aurora is a relational database service that is compatible with MySQL and PostgreSQL databases. It is fully managed by AWS and provides faster performance than MySQL databases at lower costs. Aurora provides high availability across three availability zones and automatic failover. It is easy to migrate existing MySQL databases to Aurora using AWS database migration services. Aurora is optimized for the cloud and leverages other AWS services like DynamoDB and S3 for storage. It has a simple pricing model based on the instance size and storage used.
Security is a critical element of the cloud: authentication policies, access control, and change tracking and alerting for applications running in the cloud are essential. This webinar introduces the fundamentals of AWS cloud security, along with best practices and ways to use AWS security services as your architecture evolves with the scale of your service.
First steps and best practices on Amazon AWS, presented by Carlos Condé at Xebia Cloud Day 2012.
The video of the presentation is available here: http://vimeo.com/44228168
Xebia Cloud Day 2012 is a free conference dedicated to cloud computing, focused on the Java ecosystem.
http://blog.xebia.fr/22-mai-2012-cloud-day-chez-xebia/
This document provides an overview of an AWS event. It includes details about the AWS business, including $16B in annual revenue and over 135,000 active customers. It discusses the breadth of AWS services and tools available, positioning AWS as a leader in cloud infrastructure. The document outlines how AWS gives customers superpowers with supersonic speed and pace of innovation, and provides examples of how customers are using AWS services to transform their businesses.
AWS Startup Day Bangalore: Being Well-Architected in the Cloud - Adrian Hornsby
The document discusses the AWS Well-Architected Framework which provides guidance to help build secure, high-performing, resilient, and efficient infrastructure for applications. It covers the five pillars of the framework - security, reliability, performance efficiency, cost optimization, and operational excellence. For each pillar, it discusses design principles, best practices, services, and examples to evaluate architectures against AWS recommendations.
Data Streaming with Apache Kafka & MongoDB - Confluent
Explore the use-cases and architecture for Apache Kafka, and how it integrates with MongoDB to build sophisticated data-driven applications that exploit new sources of data.
Amazon Web Services | Cloud Computing - Rahul Singh
Cloud computing is both the present and the future of the IT industry thanks to its cost and flexibility; the slides show the various components of cloud computing.
Keynote: Open Source for Business-Critical Use - MariaDB plc
The document summarizes MariaDB's 2017 roadshow, including what they are doing and where they are going. It discusses how MariaDB is building an easy to use, deploy, and extend database and how it aims to be the new leader in the changing database market. It then outlines the enterprise capabilities MariaDB provides and why organizations should consider MariaDB due to benefits like annual subscriptions, cloud infrastructure, and reduced costs.
Open Source for Business-Critical Use - MariaDB plc
The document summarizes MariaDB's Roadshow Bonn 2017 event. It discusses MariaDB's goals of building an easy to use, extensible, and deployable database. It outlines how MariaDB provides enterprise features like high availability and performance while also enabling open source innovation through community collaboration. Examples of large customers and their MariaDB implementations are provided, showing MariaDB's adoption across industries. Resources for learning more about MariaDB products and getting started with MariaDB are listed at the end.
If you could not be one of the 60,000+ attendees at AWS re:Invent, the yearly Amazon cloud conference, get the 411 on the major announcements made in Las Vegas. This presentation covers new AWS services and products, exciting announcements, and updated features.
This session focuses on showing how companies can optimize their resources through cloud-based solutions, with an emphasis on differentiation, innovation, and reducing infrastructure risk.
By Ricardo Rentería, Amazon
This document provides an overview of a workshop on cloud-native capacity, performance, and cost optimization tools and techniques. It begins by explaining the difference between a presentation and a workshop, then covers attendee introductions, presentations on cloud-native topics such as migration paths and operations tools, and benchmarking Cassandra performance at scale across AWS regions. The goal is to explore cloud-native techniques while discussing specific problems attendees face.
1. The document discusses adapting data strategies for the cloud, where time to market has replaced cost as the primary driver of cloud adoption.
2. It outlines key considerations for choosing a cloud data platform, including deployment flexibility, reducing complexity, agility, resiliency, scalability, cost, and security.
3. The document summarizes how MongoDB can provide a flexible cloud data strategy through offerings like MongoDB Atlas that offer deployment flexibility across public, private, and hybrid clouds without vendor lock-in.
Leapfrog into Serverless - a Deloitte-Amtrak Case Study | Serverless Confere... - Gary Arora
This talk was delivered at the Serverless Conference in New York City in 2017. Deloitte and Amtrak built a serverless, cloud-native solution on AWS for a real-time operational datastore and a near-real-time reporting data mart that modernized Amtrak's legacy systems and applications. With serverless solutions, we are able to leapfrog over several rungs of computing evolution.
Gary Arora is a Cloud Solutions Architect at Deloitte Consulting, specializing in Azure & AWS.
This document discusses AWS Database Migration Service (DMS) and how it can be used to automate database migrations between on-premises and cloud databases. It provides an overview of DMS features like minimal downtime, cost effectiveness, reliability and ongoing replication. It also lists the supported source and target databases for homogeneous and heterogeneous migrations. The document demonstrates how Terraform can be used to automate and manage the DMS migration process. It describes the AWS Schema Conversion Tool and how Terraform is helpful for infrastructure as code with DMS.
This document provides information about various AWS services for machine learning, analytics, databases, and data lakes. It discusses Amazon SageMaker as a fully managed service that allows developers and data scientists to build, train, and deploy machine learning models at scale. It also mentions Amazon Redshift as a data warehousing service for complex queries on large datasets and Amazon S3 as the most popular choice for data lakes with unmatched scalability, availability, and security capabilities.
This session explains how to back up and restore databases in the cloud. It introduces a range of data protection options with AWS Backup, from full backup/restore to point-in-time recovery (PITR), multi-account, and multi-Region setups (demo included). It also looks at how quickly data can be restored and replicated when the Amazon FSx for NetApp ONTAP storage service is used as the data store for a self-managed database.
Companies often face database overload, service delays, and outages when traffic spikes unexpectedly during events or product launches. Aurora auto scaling struggles to respond in real time because of provisioning delays, which leads to over-provisioning for traffic peaks. To address this, we introduce a mixed-configuration Amazon Aurora cluster architecture that combines provisioned Aurora instances with Aurora Serverless v2 (ASV2) instances, along with a custom auto scaling solution based on high-resolution metrics.
We introduce Amazon Aurora Limitless Database, which can scale an Aurora cluster to millions of write transactions per second and manage petabytes of data, letting you scale relational database workloads beyond the limits of a single Aurora writer instance without building custom application logic or managing multiple databases.
Standard support for Amazon Aurora MySQL-compatible edition version 2 (with MySQL 5.7 compatibility) ends on October 31, 2024. If you are planning a major version upgrade of Aurora MySQL, Amazon RDS Blue/Green Deployments are an excellent way to perform the upgrade without impacting your production environment. In this session you will practice a major version upgrade of Aurora MySQL using Blue/Green Deployments.
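As a hedged sketch of how such an upgrade might be kicked off with boto3: the parameter names below mirror the RDS `CreateBlueGreenDeployment` API, but the deployment name, ARN, and version strings are placeholders, not values from the session.

```python
def blue_green_request(name, source_arn, target_engine_version):
    """Assemble the keyword arguments for rds.create_blue_green_deployment():
    RDS clones the 'blue' (production) cluster into a 'green' copy running
    the target engine version, which is later promoted via a switchover."""
    return {
        "BlueGreenDeploymentName": name,
        "Source": source_arn,
        "TargetEngineVersion": target_engine_version,
    }

params = blue_green_request(
    "aurora-mysql-upgrade",                                      # placeholder name
    "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-v2",   # placeholder ARN
    "8.0.mysql_aurora.3.04.0",                                   # illustrative v3 version
)
# A real run would then call (requires AWS credentials):
#   import boto3
#   boto3.client("rds").create_blue_green_deployment(**params)
```

The production ("blue") cluster keeps serving traffic while the "green" copy upgrades, which is why this approach avoids downtime during the version change.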
Amazon DocumentDB (with MongoDB compatibility) is a fast, reliable, fully managed database service. With Amazon DocumentDB, you can easily set up, operate, and scale MongoDB-compatible databases in the cloud. In this hands-on session you will run the same application code and use the same drivers and tools that you use with MongoDB.
Database Migration Service through real-world cases: a tool for database and data migration, consolidation, separation, and analysis - Speaker: ... - Amazon Web Services Korea
The Database Migration Service (DMS) supports migrating a variety of databases beyond relational ones. Through real customer cases, we will look at how DMS is used for database migration, consolidation, and separation, and at the role it also plays in data ingest for analytics.
Amazon ElastiCache - Fully managed, Redis & Memcached Compatible Service (Lev... - Amazon Web Services Korea
Amazon ElastiCache is a fully managed, Redis- and Memcached-compatible service that improves the performance of modern applications in real time at optimal cost. Through ElastiCache best practices, we look at how to achieve the best performance and optimize your service.
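A common ElastiCache usage pattern is cache-aside: read from Redis first, fall back to the database on a miss, then populate the cache with a TTL. A minimal sketch follows; the cache argument can be any object with redis-py's `get`/`setex` methods (e.g. a `redis.Redis` instance), and the in-memory fake below exists only so the example is self-contained.

```python
class FakeRedis:
    """Stand-in exposing the two redis-py methods the pattern needs."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def setex(self, key, ttl_seconds, value):
        self.store[key] = value  # TTL ignored in this toy fake

def cache_aside_get(cache, key, load_from_db, ttl_seconds=300):
    """Return the cached value if present; otherwise load it, cache it, return it."""
    value = cache.get(key)
    if value is not None:
        return value                      # cache hit
    value = load_from_db(key)             # expensive query on a miss
    cache.setex(key, ttl_seconds, value)  # expire so stale data ages out
    return value

cache = FakeRedis()
calls = []
def db(key):
    calls.append(key)
    return f"row-for-{key}"

first = cache_aside_get(cache, "user:1", db)   # miss: hits the "database"
second = cache_aside_get(cache, "user:1", db)  # hit: served from cache
```

The TTL is the main tuning knob: shorter TTLs reduce staleness, longer TTLs reduce database load.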
Internal Architecture of Amazon Aurora (Level 400) - Speaker: 정달영, APAC RDS Speci... - Amazon Web Services Korea
Amazon Aurora is a relational database built for the cloud. Aurora combines the performance and availability of commercial databases with the simplicity and cost-effectiveness of open-source databases. This session is aimed at advanced Aurora users and covers Aurora's internal architecture and performance optimization.
[Keynote] Choosing your AWS database wisely - Speaker: 강민석, Korea Database SA Manager, WWSO, A... - Amazon Web Services Korea
For a long time, relational databases were the most widely used choice, appearing in almost every application. That made picking a database for an application architecture easy, but it limited the kinds of applications you could build. A relational database is like a Swiss Army knife: it can do many things, but it is not perfectly suited to any one task. With the advent of cloud computing, it became possible to build more elastic, scalable applications economically, changing what is technically feasible. This shift led to the rise of purpose-built databases. Developers no longer need to default to a relational database; they can consider their application's requirements carefully and choose a database that fits those requirements.
Demystify Streaming on AWS - Speaker: 이종혁, Sr Analytics Specialist, WWSO, AWS - Amazon Web Services Korea
Real-time analytics is a growing set of use cases among AWS customers. Join this session to learn how streaming data technologies let you analyze data immediately, move data between systems in real time, and reach actionable insights faster. We cover common streaming data use cases, the steps to easily enable real-time analytics in your business, and how AWS helps you use AWS streaming data services such as Amazon Kinesis.
Amazon EMR - Enhancements on Cost/Performance, Serverless - Speaker: 김기영, Sr Anal... - Amazon Web Services Korea
Amazon EMR is a managed service that makes it easy to run analytics applications using open-source frameworks such as Apache Spark, Hive, Presto, Trino, HBase, and Flink. The Amazon EMR runtimes for Spark and Presto include optimizations that deliver more than twice the performance of open-source Apache Spark and Presto. Amazon EMR Serverless is a new deployment option for Amazon EMR that lets data engineers and analysts run petabyte-scale data analytics in the cloud easily and cost-effectively. Join this session to explore Amazon EMR and EMR Serverless through concepts, design patterns, and live demos, and see how easy it is to run Spark and Hive workloads and to use Amazon EMR's integrations with EMR Studio and Amazon SageMaker Studio.
Amazon OpenSearch - Use Cases, Security/Observability, Serverless and Enhance... - Amazon Web Services Korea
Take a deep dive into the new features and capabilities of Amazon OpenSearch, including easily ingesting log and metric data, using the OpenSearch search APIs, and building visualizations with OpenSearch Dashboards. Learn about OpenSearch's observability features for debugging application issues, and how Amazon OpenSearch Service lets you focus on your search or monitoring problems instead of worrying about infrastructure management.
Enabling Agility with Data Governance - Speaker: 김성연, Analytics Specialist, WWSO,... - Amazon Web Services Korea
Data governance is the process of managing data across its entire lifecycle to ensure accuracy and completeness and to make data accessible to the people who need it. Join this session to learn how AWS provides comprehensive data governance across its analytics services, from data preparation and integration to data access, data quality, and metadata management.
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
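The core of CDC-driven pipelines, independent of delta-rs specifics, is replaying a change feed onto the current state of a table. A deliberately simplified, pure-Python illustration follows; the event shape is invented for the example, and real CDC feeds (e.g. Debezium's) carry richer envelopes:

```python
def apply_cdc(state, events):
    """Fold a list of change events into a key -> row mapping.
    Each event is (op, key, row) with op in {"insert", "update", "delete"}."""
    state = dict(state)  # leave the input snapshot untouched
    for op, key, row in events:
        if op in ("insert", "update"):
            state[key] = row          # upsert semantics
        elif op == "delete":
            state.pop(key, None)      # tolerate deletes of unknown keys
        else:
            raise ValueError(f"unknown CDC op: {op}")
    return state

snapshot = {1: {"price": 10}, 2: {"price": 20}}
feed = [
    ("update", 1, {"price": 12}),
    ("insert", 3, {"price": 30}),
    ("delete", 2, None),
]
current = apply_cdc(snapshot, feed)
```

In a Delta Lake setting, the same fold is expressed as a MERGE of the change batch into the target table; the point here is only the upsert/delete semantics.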
Designing Low-Latency Systems with Rust and ScyllaDB: An Architectural Deep Dive - ScyllaDB
Want to learn practical tips for designing systems that can scale efficiently without compromising speed?
Join us for a workshop where we'll address these challenges head-on and explore how to architect low-latency systems using Rust. During this free interactive workshop for developers, engineers, and architects, we'll cover how Rust's unique language features and the Tokio async runtime enable high-performance application development.
As you explore key principles of designing low-latency systems with Rust, you will learn how to:
- Create and compile a real-world app with Rust
- Connect the application to ScyllaDB (NoSQL data store)
- Negotiate tradeoffs related to data modeling and querying
- Manage and monitor the database for consistently low latencies
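The workshop's Rust/Tokio material isn't reproduced here, but two of the latency levers it centers on (bounded concurrency and per-request timeouts) translate to any async runtime. A small Python asyncio analogue, with invented names rather than workshop code:

```python
import asyncio

async def query(key, delay_s=0.001):
    """Stand-in for a database read; real code would await a driver call."""
    await asyncio.sleep(delay_s)
    return f"value-{key}"

async def fetch_all(keys, max_in_flight=2, timeout_s=1.0):
    """Run queries concurrently, but cap in-flight requests with a semaphore
    and bound each request's latency with a timeout."""
    sem = asyncio.Semaphore(max_in_flight)

    async def one(key):
        async with sem:  # backpressure: at most max_in_flight at once
            return await asyncio.wait_for(query(key), timeout_s)

    return await asyncio.gather(*(one(k) for k in keys))

results = asyncio.run(fetch_all(["a", "b", "c"]))
```

Capping in-flight work keeps tail latency predictable under load, which is the same reason the Rust version bounds its Tokio task concurrency.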
Semantic Cultivators: The Critical Future Role to Enable AI - artmondano
By 2026, AI agents will consume 10x more enterprise data than humans, but with none of the contextual understanding that prevents catastrophic misinterpretations.
TrustArc Webinar: Consumer Expectations vs Corporate Realities on Data Broker... - TrustArc
Most consumers believe they’re making informed decisions about their personal data—adjusting privacy settings, blocking trackers, and opting out where they can. However, our new research reveals that while awareness is high, taking meaningful action is still lacking. On the corporate side, many organizations report strong policies for managing third-party data and consumer consent yet fall short when it comes to consistency, accountability and transparency.
This session will explore the research findings from TrustArc’s Privacy Pulse Survey, examining consumer attitudes toward personal data collection and practical suggestions for corporate practices around purchasing third-party data.
Attendees will learn:
- Consumer awareness around data brokers and what consumers are doing to limit data collection
- How businesses assess third-party vendors and their consent management operations
- Where business preparedness needs improvement
- What these trends mean for the future of privacy governance and public trust
This discussion is essential for privacy, risk, and compliance professionals who want to ground their strategies in current data and prepare for what’s next in the privacy landscape.
Dev Dives: Automate and orchestrate your processes with UiPath Maestro - UiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
Role of Data Annotation Services in AI-Powered Manufacturing - Andrew Leo
From predictive maintenance to robotic automation, AI is driving the future of manufacturing. But without high-quality annotated data, even the smartest models fall short.
Discover how data annotation services are powering accuracy, safety, and efficiency in AI-driven manufacturing systems.
Precision in data labeling = Precision on the production floor.
TrsLabs - Fintech Product & Business Consulting - Trs Labs
Hybrid Growth Mandate Model with TrsLabs
Strategic investments, inorganic growth, and business model pivots are critical activities that businesses don't undertake every day. In cases like this, it may benefit your business to engage a temporary external consultant.
An unbiased plan, driven by clear-cut deliverables and market dynamics and free from the influence of your internal office politics, empowers business leaders to make the right choices.
Getting things done within budget and on schedule is key to growing a business, whether you are a start-up or a big company.
Talk to us & Unlock the competitive advantage
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On... - Aqusag Technologies
In late April 2025, a significant portion of Europe, particularly Spain, Portugal, and parts of southern France, experienced widespread, rolling power outages that continue to affect millions of residents, businesses, and infrastructure systems.
Procurement Insights Cost To Value Guide.pptx - Jon Hansen
Procurement Insights, with its integrated historic procurement industry archives, serves as a powerful complement to, not a competitor of, other procurement industry firms. It fills critical gaps in depth, agility, and contextual insight that most traditional analyst and association models overlook.
Learn more about this value-driven proprietary service offering here.
HCL Nomad Web – Best Practices and Administration of Multi-User Environments - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web is celebrated as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client updates are installed "automatically" in the background, significantly reducing administrative overhead compared to traditional HCL Notes clients. However, troubleshooting in Nomad Web presents unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how the troubleshooting process in HCL Nomad Web can be simplified to ensure a smooth and efficient user experience.
In this webinar we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder in the browser's cache (using OPFS)
- Understanding the differences between single-user and multi-user scenarios
- Using the client clocking feature
What is Model Context Protocol (MCP) - The new technology for communication bw... - Vishnu Singh Chundawat
The MCP (Model Context Protocol) is a framework designed to manage context and interaction within complex systems. This SlideShare presentation will provide a detailed overview of the MCP Model, its applications, and how it plays a crucial role in improving communication and decision-making in distributed systems. We will explore the key concepts behind the protocol, including the importance of context, data management, and how this model enhances system adaptability and responsiveness. Ideal for software developers, system architects, and IT professionals, this presentation will offer valuable insights into how the MCP Model can streamline workflows, improve efficiency, and create more intuitive systems for a wide range of use cases.
Generative Artificial Intelligence (GenAI) in Business - Dr. Tathagat Varma
My talk for the Indian School of Business (ISB) Emerging Leaders Program, Cohort 9. In this talk, I discussed key issues around the adoption of GenAI in business: benefits, opportunities, and limitations. I also discussed how my research on the Theory of Cognitive Chasms helps address some of these issues.
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights - Andrew Marnell
With expertise in data architecture, performance tracking, and revenue forecasting, Andrew Marnell plays a vital role in aligning business strategies with data insights. Andrew Marnell’s ability to lead cross-functional teams ensures businesses achieve sustainable growth and operational excellence.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxshyamraj55
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
2. Social Media
IoT Sensor Data
Bio-metrics
Click-stream data
User-activity Logs
Satellite Data
RFID
Maps
Online Content
Weather data
Email
Documents
Audio/Video
Images
Traditional Tabular Data
3. [Diagram] The same data types grouped by structure, plotted against Data Volume and Performance Requirements: Structured Data (traditional tabular data) vs. Semi-structured/Un-structured Data (social media, IoT sensor data, bio-metrics, click-stream data, user-activity logs, satellite data, RFID, maps, online content, weather data, email, documents, audio/video, images).
4. How do you optimize for scale, performance, and cost?
Scale, Performance, Cost: Zero worries!
5. Amazon’s Journey
Dec ’04: Suffers outage
Oct ’07: Dynamo paper published
Jan ’12: DynamoDB General Availability
Q3 ’16: Leader in Gartner MQ and Forrester Wave
Today: Tier-0 service powering most of Amazon
7. Amazon DynamoDB
NoSQL Database (Document & Key-value store)
Fully managed
Fast – consistent single-digit millisecond response times (microseconds with DAX)
Massive and seamless scalability
Highly available
Low Cost
10. Writes: >200% increase from baseline; Reads: >300% increase from baseline
• Millions of tables; several tables with >100 TB
• 10s of PBs of data; trillions of requests/month
Scalability | Performance | Security | Availability & Data Protection | Manageability & TCO | Dev Platform & Tools
11. Availability & Data Protection
12. Available in 16 regions worldwide; built-in replication across 3 Availability Zones
North America: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (San Francisco), Canada (Central), AWS GovCloud (US)
European Union: Dublin, London, Frankfurt
Asia Pacific: Tokyo, Singapore, Sydney, Mumbai, Seoul
South America: São Paulo
China
13. WRITES: replicated continuously to 3 AZs; persisted to disk (custom SSD)
READS: strongly or eventually consistent, with no latency trade-off
Designed to support 99.99% availability; built for high durability
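The write/read model above (continuous 3-way replication, strong vs. eventually consistent reads) can be illustrated with a toy sketch. This is not DynamoDB's internals — `ReplicatedStore` is an invented class where an eventually consistent read may hit a replica that has not yet applied the latest write:

```python
class ReplicatedStore:
    """Toy 3-replica store (illustration only, not DynamoDB internals)."""

    def __init__(self, replicas=3):
        self.replicas = [{} for _ in range(replicas)]
        self.lagging = set()  # replica indexes that have not applied the latest write

    def put(self, key, value, lag_replica=None):
        # A write lands on all replicas; optionally mark one as lagging
        # to model replication delay.
        self.lagging = {lag_replica} if lag_replica is not None else set()
        for i, replica in enumerate(self.replicas):
            if i not in self.lagging:
                replica[key] = value

    def read_strong(self, key):
        # Strongly consistent read: served from an up-to-date replica.
        for i, replica in enumerate(self.replicas):
            if i not in self.lagging:
                return replica.get(key)

    def read_eventual(self, key, replica):
        # Eventually consistent read: may hit any replica, including a stale one.
        return self.replicas[replica].get(key)


store = ReplicatedStore()
store.put("id-1", "v1")
store.put("id-1", "v2", lag_replica=2)   # replica 2 hasn't applied v2 yet
print(store.read_strong("id-1"))         # v2
print(store.read_eventual("id-1", 2))    # v1 — stale until replication catches up
```

The "no latency trade-off" claim on the slide is about strongly consistent reads being served without extra round trips; in this sketch both reads are single lookups.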
14. Multi-AZ & Cross-Region Replication (Available Today)
Features:
• Built-in 3-way data replication to three Availability Zones (AZs) within an AWS region
• Replicate to other regions with an open-source, fully extensible library
Key Benefits:
• High availability with transparent replication at no extra cost
• Disaster recovery on complete single-region failure
• Scale out by directing read traffic to replicas
16. DynamoDB: consistent performance at scale
[Chart: latency (milliseconds) holds at consistent single-digit milliseconds as requests (millions) grow]
17. Introducing DAX
What if we could go from milliseconds to microseconds?
18. DynamoDB Accelerator (DAX) — New! In Public Preview
[Diagram: your applications read through a DAX cluster sitting in front of DynamoDB]
Features:
• Fully managed: handles all upgrades, patching, and software management
• Flexible: configure DAX for one table or many
• Highly available: fault tolerant, with replication across multiple AZs within a region
• Scalable: scales out to any workload with up to 10 read replicas
• Manageability: fully integrated AWS service — Amazon CloudWatch, Tagging for DynamoDB, AWS Console
• Security: Amazon VPC, AWS IAM, AWS CloudTrail, AWS Organizations
19. DynamoDB Accelerator (DAX) — New! In Public Preview
Key Benefits:
• Fast performance: microsecond response times at millions of reads/sec from a single DAX cluster
• Ease of use: DynamoDB API compatible — requires minimal code change for existing applications, simplifying the developer experience
• Lower costs: reduce provisioned read capacity for DynamoDB tables with hot data
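Architecturally, DAX is a read-through cache in front of DynamoDB. As a rough illustration of the pattern — not the DAX client or its API; `CachedReads` and `backing_get` are invented names — a cache that serves repeated reads from memory and falls back to the base store on a miss:

```python
import time

class CachedReads:
    """Read-through cache sketch (illustrative; DAX itself is a managed cluster)."""

    def __init__(self, backing_get, ttl_seconds=300):
        self.backing_get = backing_get   # e.g. a DynamoDB GetItem round trip
        self.ttl = ttl_seconds
        self.cache = {}                  # key -> (value, fetched_at)

    def get(self, key):
        hit = self.cache.get(key)
        if hit is not None and time.monotonic() - hit[1] < self.ttl:
            return hit[0]                # cache hit: in-memory lookup
        value = self.backing_get(key)    # cache miss: read from the base table
        self.cache[key] = (value, time.monotonic())
        return value


calls = []
def slow_table_get(key):
    calls.append(key)                    # stand-in for a network round trip
    return {"id": key, "name": "A"}

reader = CachedReads(slow_table_get)
reader.get("1234")
reader.get("1234")
print(len(calls))                        # 1 — the second read was served from cache
```

Because DAX speaks the DynamoDB API, the real integration is closer to swapping the client object than to writing a wrapper like this; the sketch only shows why hot reads get cheaper and faster.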
21. • Fine-grained access control (FGAC) at table, item, or attribute level with AWS IAM
• Client-side encryption library with optional AWS KMS integration; encrypt select or all attributes
• Log DynamoDB configuration, table setup changes, and API calls with AWS CloudTrail
• Monitor performance and trigger alarms with Amazon CloudWatch
Available Today
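As a sketch of what FGAC looks like in practice, here is an IAM policy that limits a federated user to items whose partition key matches their own identity and to a fixed set of attributes. The table name, account ID, attribute names, and identity variable are placeholders; `dynamodb:LeadingKeys` and `dynamodb:Attributes` are the IAM condition keys provided for this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/UserProfiles",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"],
          "dynamodb:Attributes": ["UserId", "DisplayName", "Preferences"]
        },
        "StringEqualsIfExists": {
          "dynamodb:Select": "SPECIFIC_ATTRIBUTES"
        }
      }
    }
  ]
}
```

The `Select` condition forces callers to request specific attributes, so the attribute allow-list in `dynamodb:Attributes` can actually be enforced.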
22. VPC Endpoints for DynamoDB (VPC-E) — In Public Preview
Features:
• VPC: access DynamoDB via a secure Amazon VPC endpoint
• Access control: restrict table access for each VPC endpoint with a unique IAM role and permissions
Key Benefits:
• Turn off access from public Internet gateways, enhancing privacy and security
• Fast, secure data transfer between Amazon VPC and DynamoDB
24. DB hosted on premises vs. DynamoDB: a fully managed service removes the management overhead
On premises, you own it all: power, HVAC, networking; rack & stack; server maintenance; OS installation and patches; DB software installs and patches; database backups; high availability; scaling; app optimization
With DynamoDB, those operational tasks are handled by the service.
25. Pay-as-you-grow (Available Today)
Provision RCU (read capacity units), WCU (write capacity units), and storage
Features:
• Fully managed, serverless database service
• Non-expiring free tier with 25 RCU, 25 WCU, 25 GB storage, and 2.5M reads for Streams
• Provision read and write capacity for base tables and GSIs independently
Key Benefits:
• Pay only for provisioned read/write capacity and actual storage consumed
• Increase provisioned capacity in granular single-unit increments
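The sizing math behind RCU/WCU is simple enough to sketch: one WCU covers one write per second of an item up to 1 KB, and one RCU covers one strongly consistent read per second of an item up to 4 KB, with eventually consistent reads costing half. A small helper for back-of-envelope provisioning (the helper itself is mine, not an AWS API):

```python
import math

# DynamoDB capacity-unit model:
#   1 WCU = 1 write/sec for items up to 1 KB
#   1 RCU = 1 strongly consistent read/sec for items up to 4 KB
#   eventually consistent reads use half the RCUs

def wcus_needed(writes_per_sec, item_kb):
    """WCUs to provision: each write consumes ceil(item size / 1 KB) units."""
    return writes_per_sec * math.ceil(item_kb / 1)

def rcus_needed(reads_per_sec, item_kb, eventually_consistent=False):
    """RCUs to provision: each read consumes ceil(item size / 4 KB) units."""
    units = reads_per_sec * math.ceil(item_kb / 4)
    return math.ceil(units / 2) if eventually_consistent else units

# 500 strongly consistent reads/sec of 6 KB items -> 2 units each -> 1000 RCUs
print(rcus_needed(500, 6))                              # 1000
print(rcus_needed(500, 6, eventually_consistent=True))  # 500
print(wcus_needed(100, 2.5))                            # 300
```

Because base tables and GSIs are provisioned independently (as the slide notes), this calculation is repeated per table and per index.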
26. Time-to-Live (TTL) — Available Today

ID | Name | Size | Expiry (TTL attribute, epoch format)
1234 | A | 100 | 1456702305
2222 | B | 240 | 1456702400
3423 | C | 150 | 1459207905

Features:
• Automatic: deletes items from a table based on an expiration timestamp
• Customizable: user-defined TTL attribute in epoch time format
• Audit log: TTL activity recorded in DynamoDB Streams
Key Benefits:
• Reduce costs: delete items no longer needed
• Performance: optimize application performance by controlling table-size growth
• Extensible: trigger custom workflows with DynamoDB Streams and Lambda
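The TTL attribute is just a Unix epoch timestamp stored on the item. A sketch of setting it, and of filtering expired items on read — the filter matters because TTL deletion runs in the background, so expired items can still show up in query results for a while (attribute and helper names here are my own):

```python
import time

TTL_ATTRIBUTE = "Expiry"  # user-defined TTL attribute name, in epoch seconds

def with_ttl(item, lifetime_seconds, now=None):
    """Return a copy of the item with its expiration timestamp set."""
    now = time.time() if now is None else now
    return {**item, TTL_ATTRIBUTE: int(now + lifetime_seconds)}

def unexpired(items, now=None):
    """Drop items past their TTL. Service-side deletion is asynchronous,
    so reads should not assume expired items are already gone."""
    now = time.time() if now is None else now
    return [i for i in items if i.get(TTL_ATTRIBUTE, float("inf")) > now]

item = with_ttl({"ID": "1234", "Name": "A", "Size": 100},
                lifetime_seconds=3600, now=1456698705)
print(item[TTL_ATTRIBUTE])                # 1456702305, as in the table above
print(unexpired([item], now=1456702306))  # [] — past expiry, filtered out
```

Items without the TTL attribute never expire, which is why the filter treats a missing attribute as infinity.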
27. Tagging — Available Today
Features:
• Track costs: AWS bills broken down by tags in detailed monthly bills and Cost Explorer
• Flexible: add customizable tags to both tables and indexes
Key Benefits:
• Transparency: know exactly how much your DynamoDB tables and indexes cost
• Consistent: report spend consistently across AWS services
28. Developer Platform & Tools
29. Working with the AWS Ecosystem
IoT, S3, Kinesis, EMR, Redshift, Data Pipeline, Mobile Hub, Lambda, Elasticsearch, SNS, CloudWatch, CloudTrail
30. [Architecture diagram] RFID chips stream data into DynamoDB (document/key-value store), and DynamoDB Streams fans the changes out:
• Cross-region replication from US East (N. Virginia) to US West (SFO)
• Real-time notification using DynamoDB Triggers
• Text search and online indexing
• Connect to EMR / Redshift for further analysis
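The cross-region replication path above is driven by DynamoDB Streams: each write emits a change record, and a consumer applies it to the destination table. A toy applier — the record shape is simplified but loosely modeled on stream records, which carry INSERT/MODIFY/REMOVE event names:

```python
def apply_stream_record(replica, record):
    """Apply one change record to a replica table (a plain dict keyed by ID)."""
    key = record["Keys"]["ID"]
    if record["EventName"] in ("INSERT", "MODIFY"):
        replica[key] = record["NewImage"]   # upsert the new item image
    elif record["EventName"] == "REMOVE":
        replica.pop(key, None)              # delete, tolerating already-gone keys

replica = {}
stream = [
    {"EventName": "INSERT", "Keys": {"ID": "rfid-1"},
     "NewImage": {"ID": "rfid-1", "Location": "us-east-1"}},
    {"EventName": "MODIFY", "Keys": {"ID": "rfid-1"},
     "NewImage": {"ID": "rfid-1", "Location": "us-west-1"}},
    {"EventName": "REMOVE", "Keys": {"ID": "rfid-1"}},
]
for record in stream:
    apply_stream_record(replica, record)
print(replica)   # {} — insert, update, then delete replicated in order
```

The same apply loop is what a Lambda trigger subscribed to the stream would run, which is why triggers, replication, search indexing, and analytics feeds can all hang off the one stream.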
33. Summary (* in public preview)
Scalability: millions of requests per second from a single table; unlimited items and storage
Performance: consistent, single-digit millisecond latency; optimized for analytics workloads with native indexing; microsecond response times with DynamoDB Accelerator (DAX)*
Security: control user access at item and attribute level; SOC, PCI, ISO, FedRAMP (Moderate & High), HIPAA BAA; monitor with CloudWatch metrics and logging with CloudTrail; client-side encryption library; secure, private VPC endpoints*
Availability & Data Protection: designed for 99.99% high availability (HA); built-in replication across 3 zones
Manageability & TCO: fully managed; perpetual free tier; pay-as-you-grow for capacity and storage independently; track table-level spending with Tagging; purge data automatically (Time to Live)
Dev Platform & Tools: event-driven programming with Triggers & Lambda; advanced analytics with EMR & Amazon Redshift; full-text query support with Amazon Elasticsearch Service; real-time stream processing with Amazon Kinesis