This document presents an introduction to SPARQL, the query language for RDF data. It explains that SPARQL is used to query RDF databases much as SQL is used to query relational databases, and it describes key SPARQL features such as filters, functions, and query operators.
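As a hypothetical illustration of the style of query the deck describes, a SPARQL SELECT with a FILTER might look like the following (the FOAF prefix is a real, widely used vocabulary, but the data it queries is invented for this example):

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Find names and ages of people over 30, roughly analogous
# to a SQL "SELECT ... WHERE age > 30" over a graph of triples.
SELECT ?name ?age
WHERE {
  ?person foaf:name ?name ;
          foaf:age  ?age .
  FILTER (?age > 30)
}
ORDER BY ?name
```

The triple patterns in the WHERE clause match against the RDF graph, and FILTER restricts the matched solutions, which is the filter/operator mechanism the summary refers to.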
A brief introduction to data science, explaining concepts such as algorithms, machine learning, supervised and unsupervised learning, clustering, statistics, data preprocessing, and real-world applications.
It's part of a Data Science Corner campaign in which I will be discussing the fundamentals of data science, AI/ML, statistics, and more.
ChatGPT: What It Is and How Writers Can Use It (Adsy)
Have you heard of ChatGPT? This smart model seems to be changing the way we work in the content marketing field.
We've investigated what this AI tool can do for content writing and are ready to share the results.
Check this presentation to learn how this chatbot can assist you with content creation.
This document provides an overview of data warehousing, OLAP, data mining, and big data. It discusses how data warehouses integrate data from different sources to create a consistent view for analysis. OLAP enables interactive analysis of aggregated data through multidimensional views and calculations. Data mining finds hidden patterns in large datasets through techniques like predictive modeling, segmentation, link analysis and deviation detection. The document provides examples of how these technologies are used in industries like retail, banking and insurance.
This document discusses production planning and control. It describes production planning and control as directing and coordinating a firm's resources to meet production goals efficiently. The objectives of production planning and control include continuous production flow, optimized inventory, and customer satisfaction. The document outlines the steps in production planning and control, which include planning processes like routing, loading, and scheduling, as well as control processes like dispatching, expediting, and taking corrective actions. It also discusses how production planning and control differs across process, job, intermittent, and assembly industries.
ChatGPT is an AI chatbot created by OpenAI that can understand questions and provide answers in natural language. It was trained using reinforcement learning from human feedback on massive text datasets. In its initial release, ChatGPT is free to use but OpenAI may later monetize it due to high operating costs. While very capable, ChatGPT has limitations like an inability to gather new information or think critically.
The updated non-technical introduction to ChatGPT, SEDA, March 2023 (Sue Beckingham)
This webinar provides a brief history of ChatGPT and very recent developments in MS Bing and Edge and the launch of Google's Bard. Examples of how ChatGPT can be used and what implications and issues are foreseen are discussed.
MongoDB Atlas makes it easy to set up, operate, and scale your MongoDB deployments in the cloud. From high availability to scalability, security to disaster recovery - we've got you covered.
Automated: With MongoDB Atlas, you no longer need to worry about operational tasks such as provisioning, configuration, patching, upgrades, backups, and failure recovery. MongoDB Atlas provides the functionality and reliability you need, at the click of a button.
Flexible: Only MongoDB Atlas combines the critical capabilities of relational databases with the innovations of NoSQL. Radically simplify development and operations by delivering a diverse range of capabilities in a single, managed database platform.
Secure: MongoDB Atlas provides multiple levels of security for your database. These include robust access control, network isolation using Amazon VPC, IP whitelists, encryption of data in-flight using TLS/SSL, and optional encryption of the underlying filesystem.
Scalable: MongoDB Atlas grows with you, all with the click of a button. You can scale up across a range of instance sizes, and scale-out with automatic sharding. And you can do it with zero application downtime.
Highly Available: MongoDB Atlas is designed to offer exceptional uptime. Recovery from instance failures is transparent and fully automated. A minimum of three copies of your data are replicated across availability zones and continuously backed up.
High Performance: MongoDB Atlas provides high throughput and low latency for the most demanding workloads. Consistent, predictable performance eliminates the need for separate caching tiers, and delivers a far better price-performance ratio compared to traditional database software.
Big data is data that is too large or complex for traditional data processing applications to analyze in a timely manner. It is characterized by high volume, velocity, and variety. Big data comes from a variety of sources, including business transactions, social media, sensors, and call center notes. It can be structured, unstructured, or semi-structured. Tools used for big data include NoSQL databases, MapReduce, HDFS, and analytics platforms. Big data analytics extracts useful insights from large, diverse data sets. It has applications in various domains like healthcare, retail, and transportation.
Prompt Engineering: Strategic Impact on Organizational Transformation (sabnees)
Prompt engineering is the icing on the cake: this talk covers how it changes the organization, the impact it will have, and the evolution of a new job category and architecture.
The document provides an introduction and overview of MongoDB, including what NoSQL is, the different types of NoSQL databases, when to use MongoDB, its key features like scalability and flexibility, how to install and use basic commands like creating databases and collections, and references for further learning.
Project Objective: To gain local relevancy while maintaining the corporate brand, margin goals, and commitment to shareholders, in order to provide a relevant shopping experience for Target customers.
Managed a team of four members toward an overall goal; studied demographics and geographics to determine consumer preferences based on regionalization throughout the United States and Canada; researched and developed the concept of implementing a mobile application in Target stores across these regions, allowing sales and product promotions to be delivered to customers via email and mobile notifications; regionalized areas of the United States and Canada based on the locations of Distribution Centers, Food Distribution Centers, and Corporate Offices to determine the best rollout of new ideas; consulted with upper management to improve product niche merchandising and later presented ideas to these members of the Target team; placed 2nd out of 24 competing teams in the overall competition.
Build real-time streaming data pipelines to AWS with Confluent (confluent)
Traditional data pipelines often face scalability issues and challenges related to cost, their monolithic design, and reliance on batch data processing. They also typically operate under the premise that all data needs to be stored in a single centralized data source before it's put to practical use. Confluent Cloud on Amazon Web Services (AWS) provides a fully managed cloud-native platform that helps you simplify the way you build real-time data flows using streaming data pipelines and Apache Kafka.
Native, Web or Hybrid Mobile App Development? (Sura Gonzalez)
The document discusses different approaches to developing mobile apps, including native apps, web apps, and hybrid apps. Native apps are developed specifically for a single platform using that platform's tools and programming languages. They have full access to device features but have high development and maintenance costs. Web apps are developed with web technologies like HTML, CSS, and JavaScript and run in a mobile browser, allowing cross-platform use but more limited access to device features. Hybrid apps combine native and web technologies by wrapping web views in a native container, giving them full device access and lower costs than native apps. The document explores the characteristics and tradeoffs of each approach.
In this presentation, Raghavendra BM of Valuebound has discussed the basics of MongoDB - an open-source document database and leading NoSQL database.
The document summarizes the history and evolution of non-relational databases, known as NoSQL databases. It discusses early database systems like MUMPS and IMS, the development of the relational model in the 1970s, and more recent NoSQL databases developed by companies like Google, Amazon, Facebook to handle large, dynamic datasets across many servers. Pioneering systems like Google's Bigtable and Amazon's Dynamo used techniques like distributed indexing, versioning, and eventual consistency that influenced many open-source NoSQL databases today.
The document provides an overview of the Databricks platform, which offers a unified environment for data engineering, analytics, and AI. It describes how Databricks addresses the complexity of managing data across siloed systems by providing a single "data lakehouse" platform where all data and analytics workloads can be run. Key features highlighted include Delta Lake for ACID transactions on data lakes, auto loader for streaming data ingestion, notebooks for interactive coding, and governance tools to securely share and catalog data and models.
Use Case Patterns for LLM Applications (M Waleed Kadous)
What are the "use case patterns" for deploying LLMs into production? Understanding these will allow you to spot "LLM-shaped" problems in your own industry.
This document provides an overview of Tableau, a data visualization software. It outlines the agenda for the presentation, which will cover connecting to data, visual analytics with Tableau, dashboards and stories, calculations, and mapping capabilities. Tableau allows users to connect to various data sources, transform raw data into interactive visualizations, and share dashboards or publish them online. It is a leading tool for data analysis and visualization.
At Polestar, we hope to bring the power of data to organizations across industries, helping them analyze billions of data points and data sets to provide real-time insights, and enabling them to make critical decisions to grow their business.
This document provides an overview and introduction to MongoDB, an open-source, high-performance NoSQL database. It outlines MongoDB's features like document-oriented storage, replication, sharding, and CRUD operations. It also discusses MongoDB's data model, comparisons to relational databases, and common use cases. The document concludes that MongoDB is well-suited for applications like content management, inventory management, game development, social media storage, and sensor data databases due to its flexible schema, distributed deployment, and low latency.
A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system.
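The split, map, sort, and reduce flow described above can be sketched in plain Python as a toy in-memory word count. This is an illustration of the programming model only, not tied to Hadoop or any particular framework:

```python
from collections import defaultdict

def map_phase(chunk):
    """Map task: emit (word, 1) pairs for one independent input chunk."""
    return [(word, 1) for word in chunk.split()]

def reduce_phase(grouped):
    """Reduce task: combine all values emitted for each key."""
    return {word: sum(counts) for word, counts in grouped.items()}

def mapreduce(chunks):
    # Map: each chunk is processed independently (in parallel in a real framework).
    intermediate = [pair for chunk in chunks for pair in map_phase(chunk)]
    # Shuffle/sort: the framework sorts map output and groups it by key
    # before handing it to the reducers.
    grouped = defaultdict(list)
    for word, count in sorted(intermediate):
        grouped[word].append(count)
    return reduce_phase(grouped)

print(mapreduce(["to be or", "not to be"]))
# -> {'be': 2, 'not': 1, 'or': 1, 'to': 2}
```

In a real deployment the chunks would be blocks of files in a distributed file system and each phase would run as separate tasks, but the data flow is the same.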
Data Lakehouse, Data Mesh, and Data Fabric (r1) (James Serra)
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
1) Big data standards are needed to make data understandable, reusable, and shareable across different databases and domains.
2) Effective standards require reporting sufficient experimental details and context in both human-readable and machine-readable formats.
3) Developing standards is a collaborative process involving different stakeholder groups to define requirements, vocabularies, and data models through both formal standards bodies and grassroots organizations.
This document provides information about an art exhibition titled "MAXI mini" held from November 2-14, 2012 at the CQ Contemporary Artists Gallery located at the Walter Reid Cultural Centre. The exhibition featured works from various artists in various mediums such as acrylic, oil, and colour pencils. Artwork sizes ranged from small "mini" pieces that were 8x10 inches to larger "MAXI" pieces that were 3x4 feet. Price information is provided for many of the artworks.
This document provides an overview of patterns for scalability, availability, and stability in distributed systems. It discusses general recommendations like immutability and referential transparency. It covers scalability trade-offs around performance vs scalability, latency vs throughput, and availability vs consistency. It then describes various patterns for scalability including managing state through partitioning, caching, sharding databases, and using distributed caching. It also covers patterns for managing behavior through event-driven architecture, compute grids, load balancing, and parallel computing. Availability patterns like fail-over, replication, and fault tolerance are discussed. The document provides examples of popular technologies that implement many of these patterns.
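One of the state-partitioning patterns listed above, sharding by key, can be sketched as a minimal hash-based router. The class and key names here are invented for illustration; production systems layer replication, rebalancing (for example, consistent hashing), and failover on top of this idea:

```python
import hashlib

class ShardedStore:
    """Toy key-value store that routes each key to one of N shards.

    Hashing the key gives a deterministic, roughly uniform assignment,
    so reads and writes for the same key always hit the same shard.
    """
    def __init__(self, num_shards=4):
        self.shards = [{} for _ in range(num_shards)]

    def _shard_for(self, key):
        # Stable hash (unlike Python's randomized hash()) so routing
        # is consistent across processes and restarts.
        digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
        return self.shards[int(digest, 16) % len(self.shards)]

    def put(self, key, value):
        self._shard_for(key)[key] = value

    def get(self, key):
        return self._shard_for(key).get(key)

store = ShardedStore()
store.put("user:42", {"name": "Ada"})
print(store.get("user:42"))
# -> {'name': 'Ada'}
```

The same routing idea underlies sharded databases and distributed caches mentioned in the deck: the client (or a router tier) computes the shard, so no single node has to hold all the state.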
This document provides an overview and introduction to NoSQL databases. It begins with an agenda that explores key-value, document, column family, and graph databases. For each type, 1-2 specific databases are discussed in more detail, including their origins, features, and use cases. Key databases mentioned include Voldemort, CouchDB, MongoDB, HBase, Cassandra, and Neo4j. The document concludes with references for further reading on NoSQL databases and related topics.
Enabling the Industry 4.0 vision: Hype? Real Opportunity! (Boris Otto)
These are the slides I used at my key note speech at the NASSCOM Engineering Summit on October 7, 2015, in Pune, India. The presentation sets Industry 4.0 in context of smart services and points to the key role data plays.
PostgreSQL is a well-known relational database. But in the last few years, it has gained capabilities that previously belonged only to "NoSQL" databases. In this talk, I describe several features of PostgreSQL that give it such capabilities.
This document provides an overview of NoSQL databases and Oracle's perspective on them. It begins by explaining that NoSQL databases aim to be highly available and able to scale horizontally. It then discusses some of the origins and types of NoSQL databases, including key-value stores, document databases, column family stores, and graph databases. It also covers Brewer's CAP theorem and how NoSQL databases sacrifice consistency for availability and partition tolerance. Finally, it discusses how Oracle has incorporated some NoSQL concepts into its own database technologies over time.
This document provides an introduction to linked data and the semantic web. It discusses how the current web contains documents that are difficult for computers to understand, but linked data publishes structured data on the web using common standards like RDF and URIs. This allows data to be interlinked and queried using SPARQL. Publishing data as linked data makes the web appear as one huge global database. There are now many incentives for organizations to publish their data as linked data, as it enables data sharing and integration in addition to potential benefits like semantic search engine optimization. Linked data is a growing trend with many large organizations and governments now publishing data.
This document introduces responsive web design (RWD). It discusses how early web design copied print and led to separate websites for different devices, which was expensive and impractical. RWD emerged to create designs that change based on screen size using CSS media queries. The key aspects of RWD are designing websites that respond appropriately to different devices through fluid, flexible layouts. It emphasizes designing for all possible screen widths from the start using relative units and media queries. Testing in the browser across screen sizes is important.
The document discusses the need for ontologies that can better support linking and mapping between large, distributed databases on the semantic web. While OWL has been successful in some domains, it lacks expressivity for tasks like representing part-whole relations, temporal reasoning, and procedural knowledge. A new generation of ontology languages may need to relax requirements like decidability in order to more powerfully represent relationships that are important for data integration and discovery across multiple knowledge sources.
Following Google: Don't Follow the Followers, Follow the Leaders (C4Media)
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1B5gyu4.
Mark Madsen explains the history of databases and data processing over the past decades and looks where the industry will go. Filmed at qconsf.com.
Mark Madsen is a researcher, consultant and former CTO.
Rob Harrop, Keynote: The God, the Bad and the Ugly, NoSQL matters Paris 2015 (NoSQLmatters)
The impact that NoSQL has had on the technology community cannot be overstated. The proliferation of new and exciting data systems has led to a slew of interesting solutions to problems that were once solved the relational way. In this session we explore all that is great and good about NoSQL: the innovative software, the clever storage paradigms and the reigniting of developer interest in data access. It is unfortunate that NoSQL is not only a force for good in our community. We'll explore some of the darker corners of NoSQL: the disregard for years of proven technology, the overbearing hype, the overblown marketing and the ever present arguments over which technology is best. We close the session by exploring what can be done to extract even more value from the NoSQL movement, where we can improve how the community interacts with the larger technology community and what the future holds for data access technologies.
SQL is a language used to communicate with database servers and manage data. It allows users to query, insert, update, and manipulate data. Some key points covered in the document include:
- SQL is used to interact with many database servers like MySQL, PostgreSQL, Oracle, and Sybase.
- It allows querying and filtering of data using operators like SELECT, WHERE, BETWEEN, IN, LIKE, and more.
- Data can be inserted into database tables using INSERT or loaded in bulk using LOAD DATA.
- The UPDATE statement allows modifying existing data in tables.
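The statements listed above can be tried end to end with Python's built-in sqlite3 module. The table and column names are invented for the example, and SQLite's dialect stands in for whichever server (MySQL, PostgreSQL, Oracle) you actually use:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a table and insert rows (INSERT).
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
cur.executemany("INSERT INTO users (name, age) VALUES (?, ?)",
                [("Alice", 30), ("Bob", 25), ("Carol", 35)])

# Query and filter (SELECT ... WHERE with BETWEEN and LIKE).
cur.execute("SELECT name FROM users WHERE age BETWEEN 28 AND 40 AND name LIKE 'A%'")
print(cur.fetchall())   # -> [('Alice',)]

# Modify existing data (UPDATE).
cur.execute("UPDATE users SET age = age + 1 WHERE name = 'Bob'")
cur.execute("SELECT age FROM users WHERE name = 'Bob'")
print(cur.fetchone())   # -> (26,)
```

The `?` placeholders are parameterized queries, which is the idiomatic way to pass values into SQL from application code rather than string concatenation.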
This document provides an introduction to SQL. It defines SQL as a structured query language used to communicate with database servers. Key topics covered include SQL's uses for live queries, report generation, and normalization of data. The document outlines different variable types supported by SQL, how to create tables and insert data, comparison operators used in the WHERE clause to filter data, and how to update tables using the UPDATE command.
This document discusses NoSQL databases and why there are so many types. It begins by explaining why NoSQL databases were created, such as to handle large datasets and complex data structures that don't fit relational databases. It then discusses why there are many types of NoSQL databases, noting that they take different approaches to meet the CAP theorem by prioritizing two of availability, consistency, and partition tolerance. The document outlines several types of NoSQL databases and their key aspects before briefly discussing NewSQL databases that aim to provide scalability with ACID compliance.
Linked Data: The Real Web 2.0 (from 2008) (Uche Ogbuji)
"Linking Open Data (LOD) is a community initiative moving the Web from the idea of separated documents to a wide information space of data. The key principles of LOD are that it is simple, readily adaptable by Web developers, and complements many other popular Web trends. Linked, open data is the real substance of Web 2.0, and not flashy AJAX effects. Learn how to make your data more widely used by making its components easier to discover, more valuable, and easier for people to reuse—in ways you might not anticipate."
The document discusses the Semantic Web and linked data. It defines the current web as consisting of documents linked by hyperlinks that are readable by humans but difficult for computers to understand. The Semantic Web aims to publish structured data on the web using common standards like RDF so that data can be linked, queried, and integrated across sources. Key points include:
- The Semantic Web uses RDF to represent data as a graph so that data from different sources can be linked together.
- Linked data follows principles like using URIs to identify things and including links to other related data.
- Query languages like SPARQL allow searching and integrating linked data from multiple sources.
- There are now
This document provides an overview and introduction to NoSQL databases. It discusses how NoSQL databases were developed to address issues with scaling relational databases to handle large volumes of data with high velocity. The document outlines several categories of NoSQL databases, including key-value, document, columnar, and graph databases, and provides examples of databases that fall within each category. It also discusses some of the core concepts in NoSQL, such as eventual consistency and relaxing ACID properties, in order to prioritize availability and partition tolerance at scale.
Has the emergence of NoSQL killed RDBMS and SQL? This slide deck discusses what NoSQL is and its history. It also briefly discusses polyglot persistence.
ActiveRecord is an ORM (Object Relational Mapper) that comes with Ruby on Rails that allows developers to interact with databases using plain Ruby objects instead of SQL. It maps database tables and rows to Ruby classes and objects. It handles converting Ruby code into SQL queries behind the scenes. ActiveRecord follows conventions like naming and schema conventions so that configuration is minimized. It establishes relationships between models, allowing developers to easily retrieve associated records without writing raw SQL queries.
Don't miss this opportunity to learn about the advantages of NoSQL. Join our webinar and discover:
What the term NoSQL means
What the differences are between key-value, wide-column, graph, and document stores
What the term "multi-model" means
Enterprise NoSQL: Silver Bullet or Poison Pill - Billy Newport
"Enterprise NoSQL: Silver Bullet or Poison Pill?" discusses the pros and cons of NoSQL databases compared to SQL databases. While SQL databases will remain prevalent, NoSQL databases offer alternative data storage options with different tradeoffs. NoSQL systems typically relax SQL constraints such as schema rigidity in exchange for implementation flexibility, but this comes at the cost of features like joins and global indexes. NoSQL also shifts the system of record away from a single database, requiring applications to handle consistency and creating multiple copies of data to scale.
The document discusses NoSQL databases and MapReduce. It provides historical context on how databases were not adequate for the large amounts of data being accumulated from the web. It describes Brewer's Conjecture and CAP Theorem, which contributed to the rise of NoSQL databases. It then defines what NoSQL databases are, provides examples of different types, and discusses some large-scale implementations like Amazon SimpleDB, Google Datastore, and Hadoop MapReduce.
Lessons learnt converting from SQL to NoSQL - Enda Farrell
The document summarizes lessons learned from migrating a large, highly relational SQL database with tens of millions of records across 32 tables to a "classic" NoSQL key-value store. Some key challenges included running the SQL and NoSQL databases in parallel during migration, reconciling differences between the data stores, and addressing more limited querying capabilities of NoSQL. It emphasizes planning for people impacts and tooling needs like data migration utilities and improved testing. Overall, the migration increased flexibility but also introduced complexity, so thorough planning and monitoring were important lessons.
NoSQL, SQL, NewSQL: methods of structuring data - Tony Rogerson
Today’s environment is a polyglot database, that is to say, it’s made up of a number of different database sources and possibly types. In this session we’ll look at some of the options of storing data – relational, key/value, document etc. I’ll overview what is SQL, NoSQL and NewSQL to give you some context for today’s world of data storage.
Hard to Reach Users in Easy to Reach Places - Mike Crabb
The aim of this research project is to develop an accessible office workstation for disabled users. This includes investigating various input and output devices that can be used by disabled users and incorporating them into a workstation application to increase bandwidth for each user.
How do we design accessible services for everyone while also caring about the UX? This presentation looks at a model of accessibility that can be used for all users and we show how this works for making accessible UX-friendly tools for television, board gamers, and developers. Presented at UX Scotland 2018
The document outlines the academic peer review process. It involves submitting a paper to a conference, which is then assigned to an area chair and sent to reviewers. The reviewers create scores and feedback, which are used by the area chair to write a summary and determine if the paper is accepted or rejected. The process relies on expert reviewers to evaluate the validity and significance of contributions. The document also provides guidance on conducting a detailed peer review, including performing multiple reads of the paper, checking for flaws, structuring a review report, and focusing on strengths as well as areas for improvement.
This document provides an overview of qualitative data analysis techniques including inductive and deductive approaches, coding methods like open coding and axial coding, developing code hierarchies, comparative analysis using tables and models, and ensuring analytic quality through reflexivity. It discusses writing as a tool for analysis, such as keeping a research diary, and the importance of anonymity and validity in qualitative research ethics.
Conversation, Discourse and Document Analysis - Mike Crabb
This document provides information on studying discourse through analyzing conversations and documents. It discusses generating an archive of various materials, the practicalities of recording audio and video sources, and methods for transcribing recordings. Conversation analysis is explored by examining structural organization and how refusals are handled. Analyzing documents involves considering how and where they were read or used. Overall, the document outlines different approaches for exploring language use through discourse studies.
1. Focus groups can be used in various sectors like marketing, public relations, health services, and social science research to generate insights into attitudes, behaviors, and decision-making processes.
2. Proper research design and planning is required when conducting focus groups. This involves considering the facilitator, setting, participant size and composition, recruitment methods, topic guide, and addressing any ethical issues.
3. Focus groups are best for exploring perspectives and meanings that people ascribe to ideas and experiences. They provide insights into how views are formed and modified in a group context.
This document provides an overview of conducting interviews for research purposes. It discusses the steps involved, which include designing the study, conducting interviews, ensuring quality and ethical standards, and analyzing the data. Key aspects covered include developing interview questions, creating an engaging dialogue with participants, addressing confidentiality and consent, and using different analytic approaches such as having participants validate interpretations. The overall aim is to understand participants' perspectives in a rigorous yet empathetic manner.
This document provides an overview of qualitative research methods. It discusses what qualitative research is, how to get the right sample, important aspects of qualitative research design such as research questions and comparisons. It also covers organizing a qualitative study, ethics, and designing for different qualitative methods like interviews, focus groups, and ethnography. Key considerations for each method are outlined.
Presentation on designing for different types of accessibility challenges. Permanent, situational, and temporary aspects of accessibility are discussed.
This document discusses accessibility in gaming. It presents a model of accessibility that includes visual, cognitive, physical, communication, emotional, socio-economic, and intersectional factors. It discusses permanent, situational, and temporary challenges and provides examples. It addresses the current state of accessibility in games and outlines areas for future improvement, including increased use of simulation and guidelines. The document advocates for designing games that are both accessible and fun.
The document discusses principles of pattern perception and map design. It covers Gestalt's laws of proximity, similarity, connectedness, continuity, symmetry, closure, and relative size. It then discusses representing vector fields through showing direction, magnitude, and orientation. It also discusses the perceptual syntax of diagrams through creating nodes and relationships. Finally, it discusses the visual grammar of maps through using contours, textures, colors, and lines to represent geographic regions, paths, and point entities.
Using Cloud in an Enterprise Environment - Mike Crabb
Introduction to the different cloud models that exist and how they can be used in an enterprise level environment. Short discussion on UK DPA and its relevance to cloud computing
Teaching Cloud to the Programmers of Tomorrow - Mike Crabb
This document discusses Robert Gordon University's use of cloud computing in its computer science curriculum. It describes how courses from first year HTML to final year projects utilize cloud servers for teaching web programming and deploying student work. This allows students to focus on coding rather than server maintenance and eases collaboration. Using the cloud improves students' employability by gaining experience with tools like Git and deploying to platforms such as Microsoft Azure. It also benefits lecturers by increasing security, stability and trackability compared to maintaining physical servers. The cloud facilitates research projects through easier code and data sharing between collaborators. However, cloud services require flexibility as no single solution meets all needs.
This document discusses different ways that PHP can receive input from forms and other sources like databases. It covers using GET and POST methods to pass variables between pages via URLs or form submissions. It also provides an example of linking a form to a database by connecting in PHP, obtaining POST variables, writing an SQL query, and redirecting to another page that displays the database records.
Proactive Vulnerability Detection in Source Code Using Graph Neural Networks:... - Ranjan Baisak
As software complexity grows, traditional static analysis tools struggle to detect vulnerabilities with both precision and context—often triggering high false positive rates and developer fatigue. This article explores how Graph Neural Networks (GNNs), when applied to source code representations like Abstract Syntax Trees (ASTs), Control Flow Graphs (CFGs), and Data Flow Graphs (DFGs), can revolutionize vulnerability detection. We break down how GNNs model code semantics more effectively than flat token sequences, and how techniques like attention mechanisms, hybrid graph construction, and feedback loops significantly reduce false positives. With insights from real-world datasets and recent research, this guide shows how to build more reliable, proactive, and interpretable vulnerability detection systems using GNNs.
Societal challenges of AI: biases, multilinguism and sustainability - Jordi Cabot
Towards a fairer, inclusive and sustainable AI that works for everybody.
Reviewing the state of the art on these challenges and what we're doing at LIST to test current LLMs and help you select the one that works best for you
Designing AI-Powered APIs on Azure: Best Practices & Considerations - Dinusha Kumarasiri
AI is transforming APIs, enabling smarter automation, enhanced decision-making, and seamless integrations. This presentation explores key design principles for AI-infused APIs on Azure, covering performance optimization, security best practices, scalability strategies, and responsible AI governance. Learn how to leverage Azure API Management, machine learning models, and cloud-native architectures to build robust, efficient, and intelligent API solutions
How can one start with crypto wallet development.pptx - laravinson24
This presentation is a beginner-friendly guide to developing a crypto wallet from scratch. It covers essential concepts such as wallet types, blockchain integration, key management, and security best practices. Ideal for developers and tech enthusiasts looking to enter the world of Web3 and decentralized finance.
Secure Test Infrastructure: The Backbone of Trustworthy Software Development - Shubham Joshi
A secure test infrastructure ensures that the testing process doesn’t become a gateway for vulnerabilities. By protecting test environments, data, and access points, organizations can confidently develop and deploy software without compromising user privacy or system integrity.
TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR... - Andre Hora
Unittest and pytest are the most popular testing frameworks in Python. Overall, pytest provides some advantages, including simpler assertion, reuse of fixtures, and interoperability. Due to such benefits, multiple projects in the Python ecosystem have migrated from unittest to pytest. To facilitate the migration, pytest can also run unittest tests, thus, the migration can happen gradually over time. However, the migration can be timeconsuming and take a long time to conclude. In this context, projects would benefit from automated solutions to support the migration process. In this paper, we propose TestMigrationsInPy, a dataset of test migrations from unittest to pytest. TestMigrationsInPy contains 923 real-world migrations performed by developers. Future research proposing novel solutions to migrate frameworks in Python can rely on TestMigrationsInPy as a ground truth. Moreover, as TestMigrationsInPy includes information about the migration type (e.g., changes in assertions or fixtures), our dataset enables novel solutions to be verified effectively, for instance, from simpler assertion migrations to more complex fixture migrations. TestMigrationsInPy is publicly available at: https://github.com/altinoalvesjunior/TestMigrationsInPy.
Join Ajay Sarpal and Miray Vu to learn about key Marketo Engage enhancements. Discover improved in-app Salesforce CRM connector statistics for easy monitoring of sync health and throughput. Explore new Salesforce CRM Synch Dashboards providing up-to-date insights into weekly activity usage, thresholds, and limits with drill-down capabilities. Learn about proactive notifications for both Salesforce CRM sync and product usage overages. Get an update on improved Salesforce CRM synch scale and reliability coming in Q2 2025.
Key Takeaways:
Improved Salesforce CRM User Experience: Learn how self-service visibility enhances satisfaction.
Utilize Salesforce CRM Synch Dashboards: Explore real-time weekly activity data.
Monitor Performance Against Limits: See threshold limits for each product level.
Get Usage Over-Limit Alerts: Receive notifications for exceeding thresholds.
Learn About Improved Salesforce CRM Scale: Understand upcoming cloud-based incremental sync.
This presentation explores code comprehension challenges in scientific programming based on a survey of 57 research scientists. It reveals that 57.9% of scientists have no formal training in writing readable code. Key findings highlight a "documentation paradox" where documentation is both the most common readability practice and the biggest challenge scientists face. The study identifies critical issues with naming conventions and code organization, noting that 100% of scientists agree readable code is essential for reproducible research. The research concludes with four key recommendations: expanding programming education for scientists, conducting targeted research on scientific code quality, developing specialized tools, and establishing clearer documentation guidelines for scientific software.
Presented at: The 33rd International Conference on Program Comprehension (ICPC '25)
Date of Conference: April 2025
Conference Location: Ottawa, Ontario, Canada
Preprint: https://arxiv.org/abs/2501.10037
Explaining GitHub Actions Failures with Large Language Models Challenges, In... - ssuserb14185
GitHub Actions (GA) has become the de facto tool that developers use to automate software workflows, seamlessly building, testing, and deploying code. Yet when GA fails, it disrupts development, causing delays and driving up costs. Diagnosing failures becomes especially challenging because error logs are often long, complex and unstructured. Given these difficulties, this study explores the potential of large language models (LLMs) to generate correct, clear, concise, and actionable contextual descriptions (or summaries) for GA failures, focusing on developers’ perceptions of their feasibility and usefulness. Our results show that over 80% of developers rated LLM explanations positively in terms of correctness for simpler/small logs. Overall, our findings suggest that LLMs can feasibly assist developers in understanding common GA errors, thus, potentially reducing manual analysis. However, we also found that improved reasoning abilities are needed to support more complex CI/CD scenarios. For instance, less experienced developers tend to be more positive on the described context, while seasoned developers prefer concise summaries. Overall, our work offers key insights for researchers enhancing LLM reasoning, particularly in adapting explanations to user expertise.
https://arxiv.org/abs/2501.16495
2. THE
WHY WE ARE STORING MORE DATA NOW THAN WE EVER HAVE BEFORE
CONNECTIONS BETWEEN OUR DATA ARE GROWING ALL THE TIME
WE DON'T MAKE THINGS KNOWING THE STRUCTURE FROM DAY 1
SERVER ARCHITECTURE IS NOW AT A STAGE WHERE WE CAN TAKE ADVANTAGE OF IT
6. [Chart: data stores plotted by SIZE against COMPLEXITY - relational databases cover salary lists and most web applications, while social networks and semantic trading sit beyond them on both axes]
7. NOSQL USE CASES
LARGE DATA VOLUMES
MASSIVELY DISTRIBUTED ARCHITECTURE REQUIRED TO STORE THE DATA
GOOGLE, AMAZON, FACEBOOK, 100K SERVERS
EXTREME QUERY WORKLOAD
IMPOSSIBLE TO EFFICIENTLY DO JOINS AT THAT SCALE WITH AN RDBMS
SCHEMA EVOLUTION
SCHEMA FLEXIBILITY IS NOT TRIVIAL AT A LARGE SCALE, BUT IT CAN BE WITH NOSQL
11. NOSQL
PROS AND CONS
PROS
MASSIVE SCALABILITY
HIGH AVAILABILITY
LOWER COST
SCHEMA FLEXIBILITY
SPARSE AND SEMI-STRUCTURED DATA
CONS
LIMITED QUERY CAPABILITIES
NOT STANDARDISED (PORTABILITY MAY BE AN ISSUE)
STILL A DEVELOPING TECHNOLOGY
12. [Word cloud of "NOSQL" highlighting BIGTABLE, KEY VALUE, DOCUMENT, and GRAPHDB]
FOUR EMERGING TRENDS IN NOSQL DATABASES
13. BUT FIRST…
IMAGINE A LIBRARY
LOTS OF DIFFERENT FLOORS
DIFFERENT SECTIONS ON EACH FLOOR
DIFFERENT BOOKSHELVES IN EACH SECTION
LOTS OF BOOKS ON EACH SHELF
LOTS OF PAGES IN EACH BOOK
LOTS OF WORDS ON EACH PAGE
EVERYTHING IS WELL ORGANISED
AND EVERYTHING HAS A SPACE
14. BUT FIRST…
IMAGINE A LIBRARY
WHAT HAPPENS IF WE
BUY TOO MANY BOOKS!?
(THE WORLD EXPLODES AND THE KITTENS WIN)
15. BUT FIRST…
IMAGINE A LIBRARY
WHAT HAPPENS IF WE WANT TO
STORE CDS ALL OF A SUDDEN!?
(THE WORLD EXPLODES AND THE KITTENS WIN)
16. BUT FIRST…
IMAGINE A LIBRARY
WHAT HAPPENS IF WE WANT
TO GET RID OF ALL BOOKS
THAT MENTION KITTENS
(KITTENS STILL WIN)
17. BIG TABLE
BEHAVES LIKE A STANDARD RELATIONAL DATABASE BUT WITH A SLIGHT CHANGE
http://research.google.com/archive/bigtable.html
http://research.google.com/archive/spanner.html
DESIGNED TO WORK WITH A LOT OF DATA…A REALLY BIG CRAP TON
CREATED BY GOOGLE AND NOW USED BY LOTS OF OTHERS
18. BIG TABLE
THIS IS A STANDARD RELATIONAL DATABASE
THIS IS A BIG TABLE DATABASE
(AND NOW THE NAME MAKES SENSE!)
19. BIG TABLE
“A Bigtable is a sparse, distributed, persistent
multidimensional sorted map. The map is indexed by a
row key, column key, and a timestamp; each value in
the map is an uninterpreted array of bytes.”
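The quoted definition can be sketched as a toy in Python. This is purely an illustration of the data model from the quote (a map indexed by row key, column key, and timestamp, holding uninterpreted bytes), nothing like Google's actual implementation; the `ToyBigtable` name and the example row and column keys are made up for the sketch.

```python
# Toy sketch of the quoted data model (illustration only, nothing like
# Google's real implementation): a map indexed by (row key, column key,
# timestamp) whose values are uninterpreted byte strings.
class ToyBigtable:
    def __init__(self):
        self._map = {}  # (row, column, timestamp) -> bytes

    def put(self, row, column, timestamp, value):
        assert isinstance(value, bytes)  # values are uninterpreted bytes
        self._map[(row, column, timestamp)] = value

    def cell(self, row, column):
        """Most recent value stored for a (row, column) cell, or None."""
        versions = [(ts, v) for (r, c, ts), v in self._map.items()
                    if (r, c) == (row, column)]
        return max(versions)[1] if versions else None


t = ToyBigtable()
t.put("com.cnn.www", "contents:", 1, b"<html>v1")
t.put("com.cnn.www", "contents:", 2, b"<html>v2")
print(t.cell("com.cnn.www", "contents:"))  # b'<html>v2'
```

Keeping every timestamped version around, rather than overwriting cells, is what makes the map "multidimensional" in the quote.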
22. KEY VALUE
AGAIN, DESIGNED TO WORK WITH A LOT
OF DATA
EACH BIT OF DATA IS STORED IN A
SINGLE COLLECTION
EACH COLLECTION CAN HAVE DIFFERENT
TYPES OF DATA
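The slide's three points can be seen in a minimal sketch (an illustration only, not any particular product's API; `KVStore` and the sample keys are invented): named collections of opaque keys, where values in one collection can be of completely different types.

```python
# Minimal key-value store sketch (illustrative, not a real product's API):
# each collection maps opaque keys to values, and the values in a single
# collection need not share a type or structure.
class KVStore:
    def __init__(self):
        self._collections = {}

    def put(self, collection, key, value):
        self._collections.setdefault(collection, {})[key] = value

    def get(self, collection, key, default=None):
        return self._collections.get(collection, {}).get(key, default)


store = KVStore()
store.put("users", "u:1", b"\x00\x01binary-blob")      # raw bytes
store.put("users", "u:2", {"name": "Mylo"})            # a dict
store.put("sessions", "s:9", "2024-05-01T10:00:00Z")   # a string
print(store.get("users", "u:2"))  # {'name': 'Mylo'}
```

Note the trade-off the later slides mention: lookups by key are trivial, but there is no query language over the values themselves.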
26. DOCUMENT STORE
DESIGNED TO WORK WITH A LOT OF
DATA (BEGINNING TO NOTICE A THEME?)
VERY SIMILAR TO A KEY VALUE DATABASE
MAIN DIFFERENCE IS THAT YOU CAN
ACTUALLY SEE THE VALUES
30. SIDENOTE
{ name: Gunther, colour: tabby }
{ name: Mylo, colour: ginger }
{ name: Ruffus, colour: grey, age: kitten }
{ name: Fred, colour: ginger(ish), age: kitten }
{ name: Quentin, colour: ginger(ish), legs: 3 }
WE CAN ADD IN FIELDS AS AND WHEN WE NEED THEM
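The cat sidenote can be mirrored in plain Python (not a real document database, just dicts in a list): each document lives in one collection, and fields such as `age` or `legs` exist only on the documents that need them.

```python
# The cat sidenote as plain Python dicts (not a real document database):
# documents in one collection share no fixed schema, so fields like
# `age` or `legs` appear only where they are needed.
cats = [
    {"name": "Gunther", "colour": "tabby"},
    {"name": "Mylo", "colour": "ginger"},
    {"name": "Ruffus", "colour": "grey", "age": "kitten"},
    {"name": "Fred", "colour": "ginger(ish)", "age": "kitten"},
    {"name": "Quentin", "colour": "ginger(ish)", "legs": 3},
]

# Unlike a relational row, we can query on a field only some documents have.
kittens = [c["name"] for c in cats if c.get("age") == "kitten"]
print(kittens)  # ['Ruffus', 'Fred']
```

In a relational table the same data would force every row to carry NULL `age` and `legs` columns, and adding a new field would mean a schema migration.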
33. GRAPH DATABASE
FOCUS HERE IS ON MODELLING THE
STRUCTURE OF THE DATA
INSPIRED BY GRAPH THEORY (GO MATHS!)
SCALES REALLY WELL TO THE
STRUCTURE OF THE DATA
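A toy sketch of the graph idea (illustrative only; real graph databases such as Neo4j do far more, and all the names here are invented): nodes, typed relationships, and a traversal that follows the structure of the data rather than joining tables.

```python
# Toy property-graph sketch (illustrative only, not a real graph DB):
# nodes, typed relationships, and a two-hop traversal.
from collections import defaultdict

edges = defaultdict(list)  # node -> [(relationship, neighbour), ...]

def relate(a, rel, b):
    edges[a].append((rel, b))

relate("alice", "FRIEND", "bob")
relate("bob", "FRIEND", "carol")
relate("alice", "LIKES", "nosql")

def friends_of_friends(person):
    """Follow FRIEND edges two hops out from `person`."""
    result = set()
    for rel, friend in edges[person]:
        if rel == "FRIEND":
            for rel2, fof in edges[friend]:
                if rel2 == "FRIEND" and fof != person:
                    result.add(fof)
    return result

print(friends_of_friends("alice"))  # {'carol'}
```

The point of the slide is that this kind of relationship-following query stays cheap as the graph grows, where the equivalent multi-join SQL query would not.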
43. THE BASICS
High availability and disaster recovery are a must
Understand the pros and cons of each design model
Don’t pick something just because it is new
Do you remember the Zune?
Don’t pick something based JUST on performance
44. SQL
High performance for transactions. Think ACID
Highly structured, very portable
Small amounts of data
SMALL IS LESS THAN 500GB
Supports many tables with different types of data
Can fetch ordered data
Compatible with lots of tools
THE GOOD
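The ACID point above can be seen with Python's built-in sqlite3 module (a small sketch; the `accounts` table is invented for illustration): when a statement fails mid-transaction, everything in that transaction rolls back.

```python
# Sketch of ACID atomicity using Python's built-in sqlite3: a failed
# statement inside a transaction rolls back the whole transaction.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # the connection context manager wraps one transaction
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        conn.execute("INSERT INTO accounts VALUES ('alice', 999)")  # violates PRIMARY KEY
except sqlite3.IntegrityError:
    pass  # the debit above was rolled back along with the failed insert

print(conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone())  # (100,)
```

This all-or-nothing behaviour is exactly what the NoSQL cons slides later trade away in exchange for scale.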
47. SQL
Complex queries take a long time
The relational model takes a long time to learn
Not really scalable
Not suited for rapid development
THE BAD
48. noSQL
Fits well for volatile data
High read and write throughput
Scales really well
Rapid development is possible
In general it’s faster than SQL
THE GOOD
51. noSQL
Key/value pairs need to be packed/unpacked all the time
Security is still maturing compared to SQL
Lack of relations from one key to another
THE BAD
52. tl;dr
SQL works great, but can't scale for large data
noSQL works great, but doesn't fit all situations
so use both, but think about when you want to use them!
53. A lot of this content is lovingly ripped from
lots of other (more impressive)
presentations that are already on
SlideShare - you should check them out!
FINALLY