Trends in the Development of Service Provider Networks (Cisco Russia)
The networking industry is going through one of its largest technological revolutions, driven by software-defined networking (SDN) and network function virtualization (NFV). The sheer number of concurrent initiatives in this segment is hard to track and analyze even for experienced specialists. In this session we attempt to systematize the latest technology trends in the area, drawing on the work of international institutes and network standardization bodies such as the IETF, ETSI, MEF, ONF and others. Understanding and correctly choosing SDN technologies at the initial stage of adoption can significantly improve the economic return of each specific project. The industry is paying increasing attention to developing and standardizing not only technologies for implementing network functions, but also orchestration platforms with open interfaces for service delivery. This is needed to increase infrastructure flexibility and the speed with which it adapts to new business requirements.
The session will be of interest to network and IT specialists, heads of service provider network development and operations departments, and Cisco partners interested in applying SDN technologies to deliver modern telecommunications and cloud services.
Cisco Principles and Approaches to Automation in Service Provider Networks (Cisco Russia)
Webinar recording: http://ciscoclub.ru/principy-i-podhody-cisco-dlya-avtomatizacii-v-setyah-operatorov-svyazi
This conceptual session covers the purpose and standardization of the NETCONF/RESTCONF protocols and their data modeling language, YANG, which together provide a modern approach to device configuration management. It also describes Cisco Systems products that use network topology discovery (BGP-LS) and path computation for traffic flows (PCEP). The final part of the presentation introduces the concept of a new extensible platform for automating operational processes in service provider networks.
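The model-driven configuration idea behind NETCONF/YANG can be illustrated with a short sketch. The snippet below builds a NETCONF <edit-config> RPC whose payload follows the shape of the ietf-interfaces YANG model; the interface name and description are invented for illustration, and a real deployment would send this XML to a device over SSH with a NETCONF client such as ncclient rather than just printing it.

```python
# Sketch of model-driven configuration: the desired state is expressed as
# structured XML following a YANG model, then wrapped in a NETCONF
# <edit-config> RPC. Interface values here are hypothetical.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(interface: str, description: str) -> str:
    """Build a NETCONF <edit-config> RPC body as an XML string."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}candidate")  # edit the candidate datastore
    config = ET.SubElement(edit, f"{{{NC}}}config")
    # Payload shaped after the ietf-interfaces YANG model
    ifs = ET.SubElement(config, "interfaces",
                        {"xmlns": "urn:ietf:params:xml:ns:yang:ietf-interfaces"})
    intf = ET.SubElement(ifs, "interface")
    ET.SubElement(intf, "name").text = interface
    ET.SubElement(intf, "description").text = description
    return ET.tostring(rpc, encoding="unicode")

xml_rpc = build_edit_config("GigabitEthernet0/0", "uplink to core")
print(xml_rpc)
```

The point of the model-driven approach is that this payload is machine-validated against the YANG model before it ever reaches the device, unlike free-form CLI scripting.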
Webinar recording: http://ciscoclub.ru/analitika-v-cod
Analytics has become one of the key directions in the development of modern information technology. In this session we review the main Cisco products for analytics in the data center, including Tetration Analytics and AppDynamics.
A New Era of Enterprise Network Management and Operation with Cisco DNA (Cisco Russia)
Learn more:
https://www.cisco.com/c/ru_ru/solutions/enterprise-networks/index.html
Agenda:
- The challenges of digitization
- DNA drives network transformation
- Day 0: network automation
- Automation driven by business-intent policy
- Day N: monitoring and analytics
- Conclusion
Setting up a test lab
Transport network testing (L2-3, IxNetwork)
Load testing with application traffic (L4-7, IxLoad)
WAN emulators (IXIA Anue)
Testing synchronization in packet networks
Testing a mobile operator's access and core network
Testing network security devices (DDoS, attacks, botnets, malware)
WiFi load testing (radio and transport)
Apache Spark: Its Place Within a Big Data Stack (Junjun Olympia)
Spark is a fast, large-scale data processing engine that can be 10-100x faster than Hadoop MapReduce. It is commonly used to capture and extract data from various sources, transform the data by handling data quality issues and computing derived fields, and then store the data in files, databases, or data warehouses to enable querying, analysis, and visualization of the data. Spark provides a unified framework for these functions and is an essential part of the modern big data stack.
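The capture/transform/store flow described above can be sketched with the Python standard library. This is not Spark code, only an illustration of the same three steps (in Spark they would be DataFrame reads, transformations, and writes); the sample records and the derived field are invented.

```python
# The capture -> transform -> store flow, sketched with sqlite3 instead of
# Spark. Sample data and the "revenue" derived field are invented.
import sqlite3

# 1) Capture: raw records, some with data-quality issues (missing price)
raw = [
    {"sku": "A1", "qty": 3, "price": 9.99},
    {"sku": "B2", "qty": 1, "price": None},   # bad record, dropped below
    {"sku": "A1", "qty": 2, "price": 9.99},
]

# 2) Transform: drop bad rows and compute a derived field (revenue)
clean = [
    {**r, "revenue": r["qty"] * r["price"]}
    for r in raw if r["price"] is not None
]

# 3) Store: load into a queryable table
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (sku TEXT, qty INT, price REAL, revenue REAL)")
db.executemany("INSERT INTO sales VALUES (:sku, :qty, :price, :revenue)", clean)

# Analysis: total revenue per SKU
total = db.execute("SELECT sku, SUM(revenue) FROM sales GROUP BY sku").fetchall()
print(total)
```

Spark's value is that these same steps run distributed over a cluster with one unified API, instead of being stitched together from separate tools.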
The document discusses data migration between Oracle and MongoDB databases. It provides an introduction to the topic, reasons for migration between different database systems, key differences between relational and non-relational databases, and a demonstration of migrating data between Oracle and MongoDB. The document aims to help developers take advantage of both database types to maximize benefits for their organizations.
This document discusses key aspects of migrating a database from SQL Server to Oracle 11g. The major steps in a migration are analysis, migration, testing, and deployment. The migration process involves migrating the schema and objects, business logic, and client applications. Tools like Oracle Migration Workbench and Database Migration Verifier help automate the migration and validation of the migrated schema and data.
Oracle Database 12c includes several new features:
1) Online statistics gathering improves optimizer performance by gathering statistics for new objects during creation instead of requiring a full data scan later.
2) Invisible columns allow adding a column to a table without showing it in SELECT queries or the table definition unless explicitly specified.
3) Multiple indexes on the same column are now supported if they differ in characteristics like being unique/non-unique or using different index types.
Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Ar... (Mark Rittman)
Presentation from the Rittman Mead BI Forum 2015 masterclass, part 2 of a two-part session that also covered creating the Discovery Lab. Goes through setting up Flume log and Twitter feeds into CDH5 Hadoop using the ODI12c Advanced Big Data Option, then looks at the use of OBIEE11g with Hive, Impala and Big Data SQL, before finally using Oracle Big Data Discovery for faceted search and data mashup on top of Hadoop.
Data Warehouse Migration to Oracle Data Integrator 11g (Michael Rainey)
Pacific Northwest National Laboratory migrated their data warehouse from SQL Server and Visual Basic to Oracle Data Integrator. They developed a SQL parsing tool and used the ODI SDK to programmatically build ODI objects from the SQL metadata, automating the migration of over 4,900 packages. This minimized implementation risks and allowed them to complete the migration in a fraction of the originally estimated 2-3 years. The automated approach reduced human errors and is now used for ongoing operations.
In-Memory Computing: How, Why? and Common Patterns (Srinath Perera)
Traditionally, big data is mostly read from disk and processed. However, most big data systems are latency bound, meaning the CPU often sits idle waiting for data to arrive. The problem is most pronounced in use cases like graph search that randomly access different parts of a dataset. In-memory computing proposes an alternative model in which data is loaded or stored in memory and processed there instead of from disk. Although such designs cost more in terms of memory, the resulting systems can be orders of magnitude faster (e.g. 1000x), which can lead to savings in the long run. With rapidly falling memory prices, this cost difference shrinks by the day. Furthermore, in-memory computing enables use cases, such as ad hoc analysis over large datasets, that were not possible before. This talk provides an overview of in-memory technology and discusses how WSO2 technologies like complex event processing can be used to build in-memory solutions. It also previews upcoming improvements in the WSO2 platform.
This document provides an overview of in-memory databases, summarizing different types including row stores, column stores, compressed column stores, and how specific databases like SQLite, Excel, Tableau, Qlik, MonetDB, SQL Server, Oracle, SAP Hana, MemSQL, and others approach in-memory storage. It also discusses hardware considerations like GPUs, FPGAs, and new memory technologies that could enhance in-memory database performance.
In-memory databases (IMDBs) store data primarily in RAM for faster access than disk-based databases. While an older concept, IMDBs have become more practical due to lower RAM costs, multi-core CPUs, and 64-bit systems allowing more memory. IMDBs have different architectures, data representations, indexing, and query processing optimized for memory versus disk. They also face challenges in providing durability without disk and scaling to very large data sizes.
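The row-store versus column-store distinction discussed in these overviews can be shown in a few lines of Python. The table contents are invented, and real engines add compression and vectorized execution on top of this layout difference; the sketch only shows why a single-column aggregate favors the columnar layout.

```python
# Row store vs column store: the same table in two layouts.
# A column store keeps each attribute contiguous, so an aggregate over one
# column touches only that column; a row store must walk whole records.
# Table contents are invented for illustration.

# Row store: a list of records
rows = [
    {"id": 1, "city": "Oslo",   "amount": 10},
    {"id": 2, "city": "Moscow", "amount": 25},
    {"id": 3, "city": "Oslo",   "amount": 5},
]

# Column store: one array per attribute
cols = {
    "id":     [1, 2, 3],
    "city":   ["Oslo", "Moscow", "Oslo"],
    "amount": [10, 25, 5],
}

# SUM(amount): the row store scans every full record...
row_sum = sum(r["amount"] for r in rows)
# ...while the column store scans one contiguous array
col_sum = sum(cols["amount"])

assert row_sum == col_sum == 40
```

In memory the contiguous column array is also far friendlier to CPU caches and compression, which is why analytic in-memory databases overwhelmingly choose the columnar layout.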
Device to Intelligence: IoT and Big Data in Oracle (JunSeok Seo)
The document discusses Internet of Things (IoT) and big data in the context of Oracle technologies. It provides examples of how Oracle solutions have helped companies in various industries like transportation, healthcare, manufacturing, and telecommunications manage IoT and big data. Specifically, it highlights how Oracle technologies allow for efficient processing, analysis and management of large volumes of data from IoT devices and sensor networks in real-time.
This document introduces Pig, an open source platform for analyzing large datasets that sits on top of Hadoop. It provides an example of using Pig Latin to find the top 5 most visited websites by users aged 18-25 from user and website data. Key points covered include who uses Pig, how it works, performance advantages over MapReduce, and upcoming new features. The document encourages learning more about Pig through online documentation and tutorials.
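The top-5 example described above can be mirrored in plain Python. The sample data and field names are invented; in Pig Latin the same pipeline would be expressed with FILTER, JOIN, GROUP, ORDER and LIMIT, compiled down to MapReduce jobs.

```python
# Plain-Python mirror of the Pig Latin pipeline: FILTER users by age,
# JOIN with page visits, GROUP by site, COUNT, ORDER desc, LIMIT 5.
# Sample data and field names are invented for illustration.
from collections import Counter

users = [("alice", 22), ("bob", 30), ("carol", 19), ("dave", 25)]
visits = [
    ("alice", "news.example"), ("alice", "video.example"),
    ("carol", "news.example"), ("dave", "shop.example"),
    ("bob", "news.example"),   # filtered out below: bob is 30
]

# FILTER: keep users aged 18-25
young = {name for name, age in users if 18 <= age <= 25}

# JOIN + GROUP + COUNT in one pass
counts = Counter(site for user, site in visits if user in young)

# ORDER ... DESC, LIMIT 5
top5 = counts.most_common(5)
print(top5)
```

Pig's appeal is that each of these steps is one declarative statement, while the platform handles distributing the join and aggregation across the cluster.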
Introduction to Data Processing Using Hadoop and Pig (Ricardo Varela)
In this talk we give an introduction to data processing with big data and review the basic concepts of MapReduce programming with Hadoop. We also comment on the use of Pig to simplify the development of data processing applications.
YDN Tuesdays are geek meetups organized on the first Tuesday of each month by YDN in London.
The document discusses a presentation about practical problem solving with Hadoop and Pig. It provides an agenda that covers introductions to Hadoop and Pig, including the Hadoop distributed file system, MapReduce, performance tuning, and examples. It discusses how Hadoop is used at Yahoo, including statistics on usage. It also provides examples of how Hadoop has been used for applications like log processing, search indexing, and machine learning.
HIVE: Data Warehousing & Analytics on Hadoop (Zheng Shao)
Hive is a data warehousing system built on Hadoop that allows users to query data using SQL. It addresses issues with using Hadoop for analytics like programmability and metadata. Hive uses a metastore to manage metadata and supports structured data types, SQL queries, and custom MapReduce scripts. At Facebook, Hive is used for analytics tasks like summarization, ad hoc analysis, and data mining on over 180TB of data processed daily across a Hadoop cluster.
Apache Hive provides SQL-like access to data stored in Apache Hadoop. Apache HBase stores tabular data in Hadoop and supports update operations. Combining these two capabilities is often desired; however, the current integration shows limitations such as performance issues. In this talk, Enis Soztutar presents an overview of Hive and HBase and discusses new updates and improvements from the community on the integration of the two projects. Techniques used to reduce data exchange and improve efficiency are also covered.
Hadoop, Pig, and Twitter (NoSQL East 2009) (Kevin Weil)
A talk on the use of Hadoop and Pig inside Twitter, focusing on the flexibility and simplicity of Pig, and the benefits of that for solving real-world big data problems.
The FORS Solution Center: presentations of products and technologies, a hardware demonstration hall, training and testing sessions, and the design and optimization of solutions on the Oracle stack. Oracle Big Data Appliance.
My presentation at OSPconf. Big Data Forum 2015 in Moscow on Informatica products and solutions in Big Data space: datawarehouse offload, managed data lake, big data Customer MDM, streaming analytics platform.
Thinking about deploying a serious analytics platform? Great idea: high-performance analytics helps speed up business decisions and extract valuable new insight from data. Learn what qualities a modern analytics platform should have.
Oracle announced new autonomous database and cyber security technologies powered by machine learning. The autonomous database can instantly patch itself while running with no downtime. It is fully automated and eliminates human labor for management. Oracle also announced a new highly automated cyber security technology that can detect and block attacks in near real-time by working with the autonomous database. Demos showed Oracle autonomous database outperforming Amazon Redshift and Oracle databases on Amazon cloud on real customer workloads, while being significantly less expensive.
Oracle OpenWorld 2016: Big Data References (Andrey Akulov)
The document outlines an agenda for a customer panel session on data warehousing and big data hosted by Oracle. The agenda includes presentations from five customer panelists on their data and analytics architectures, followed by a question and answer session. Each panelist presentation provides details on the customer's business and technical needs, data environments, transformation efforts, and analytics use cases.
The document discusses Oracle's Infrastructure as a Service (IaaS) offerings. It provides an overview of Oracle's compute, storage, and networking services including Elastic Compute, Dedicated Compute, Engineered Systems IaaS, and Bare Metal Compute. It describes how these services allow customers to migrate existing workloads to the cloud while maintaining control and using their existing tools and automation. The document also notes challenges that public cloud IaaS offerings have in addressing the needs of large enterprises due to differences from corporate data centers in software stacks, tooling, and network configuration options.
Oracle is strengthening its position in the cloud computing market by acquiring Ravello Systems, a leader in nested virtualization, and by rapidly developing solutions for moving on-premise capacity to the cloud.
The document discusses Oracle Enterprise Metadata Management (OEMM) which allows users to manage metadata, data lineage, and business glossaries. It harvests metadata from popular platforms including BI tools, ETL tools, databases, and big data tools. OEMM provides vertical lineage that shows traceability from business terms to IT artifacts, and horizontal lineage that traces columns and fields across multiple systems. It allows interactive exploration of metadata relationships through zooming and filtering capabilities.
Exalogic is an engineered system optimized for running Oracle middleware and applications. The document discusses Exalogic's hardware and software components, including the Exalogic Elastic Cloud Software (EECS) which provides virtualization, management, and cloud capabilities. Key features of the latest EECS 2.0.6 release include improved performance, stability, deployment tools, and the ability to run virtual and physical environments on the same Exalogic rack.
The Evolution of Big Data and Information Management: Reference Architecture (Andrey Akulov)
This document outlines Oracle's third generation Information Management Reference Architecture. It defines key concepts like the Raw Data Reservoir for storing immutable raw data, and the Foundation Data Layer for standardized enterprise data. It describes logical components like the Data Factory for ingestion and interpretation, and the Access and Performance Layer for enabling queries. It also provides design patterns for different use cases including a Discovery Lab, Information Platform, and Real-Time Event processing. Overall the architecture aims to practically manage all types of data at scale to maximize information value.
Oracle Big Data Appliance
2x faster than a do-it-yourself cluster¹
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. Differences in hardware, software, or configuration will affect actual performance. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
¹ Configurations were compared using the Big Data Benchmark for BigBench. Oracle Big Data Appliance configuration: 6 nodes, each with an Intel Xeon CPU E5-2699 v3 (HT enabled), 128 GB DDR4, 12 x 4 TB HDD, InfiniBand network (1 connection, observed max throughput 24 Gb/sec), Oracle Linux Enterprise 6, and CDH 5.4.4 with modified configuration. DIY cluster configuration: 6 nodes, each with an Intel Xeon CPU E5-2699 v3 (HT enabled), 128 GB DDR4, 1 x 64 GB SSD for the OS, 12 x 4 TB HDD, 10 Gb network (1 connection), CentOS 6.6, and CDH 5.3.3 with minimal changes.