Dapper.NET is a micro-ORM that provides simple methods for querying and mapping data from databases. It allows for CRUD operations, batch inserts, stored procedures, views, and transaction support. Dapper is lightweight, shipping as a single file with fewer than 700 lines of code. It provides fast, pure-SQL functionality by enriching IDbConnection with extension methods. Queries can map results to POCOs or dynamic objects. Additional extensions like Dapper.Contrib provide more advanced features.
The document discusses aspect-oriented programming (AOP) and key AOP concepts like joinpoints, pointcuts, advice, and aspects. It explains how AOP addresses cross-cutting concerns in code through separation of concerns using pointcuts, advice, and aspects rather than scattering code throughout a system. The document also provides examples of how to configure AOP using Spring AOP through pointcuts, advice definitions, and proxies.
This document provides an overview of Spring MVC including:
- Spring MVC is a web framework built on the Servlet API that uses the MVC pattern. It features a DispatcherServlet that handles requests and delegates to controllers.
- The request processing workflow in Spring MVC involves the DispatcherServlet dispatching tasks to controllers, which interact with services and return a view name. The view is then rendered using a ViewResolver.
- Spring MVC applications use a WebApplicationContext containing web-related beans like controllers and mappings, which can override beans in the root context. Configuration can be done via XML or Java-based approaches. Important annotations map requests and bind parameters.
This document provides an overview of Spring and Spring Boot frameworks. It discusses the history of Java and Spring, how Spring provides inversion of control and dependency injection. It also covers Spring MVC for web applications, Spring Data for data access, and how Spring Boot aims to simplify configuration. The document concludes with discussing some next steps including looking at Spring Security, Spring Cloud, and using Spring with other JVM languages.
Is it easier to add functional programming features to a query language, or to add query capabilities to a functional language? In Morel, we have done the latter.
Functional and query languages have much in common, and yet much to learn from each other. Functional languages have a rich type system that includes polymorphism and functions-as-values and Turing-complete expressiveness; query languages have optimization techniques that can make programs several orders of magnitude faster, and runtimes that can use thousands of nodes to execute queries over terabytes of data.
Morel is an implementation of Standard ML on the JVM, with language extensions to allow relational expressions. Its compiler can translate programs to relational algebra and, via Apache Calcite’s query optimizer, run those programs on relational backends.
In this talk, we describe the principles that drove Morel’s design, the problems that we had to solve in order to implement a hybrid functional/relational language, and how Morel can be applied to implement data-intensive systems.
(A talk given by Julian Hyde at Strange Loop 2021, St. Louis, MO, on October 1st, 2021.)
Getting Started ASP.NET Core Training, Tutorial - Beginner to Advanced – Dot Net Tricks
Join this ASP.NET Core training course, designed primarily for beginners and professionals who want to develop cloud-based web applications using the ASP.NET Core framework and the MVC design pattern.
We all have tasks from time to time for bulk-loading external data into MySQL. What's the best way of doing this? That's the task I faced recently when I was asked to help benchmark a multi-terabyte database. We had to find the most efficient method to reload test data repeatedly without taking days to do it each time. In my presentation, I'll show you several alternative methods for bulk data loading, and describe the practical steps to use them efficiently. I'll cover SQL scripts, the mysqlimport tool, MySQL Workbench import, the CSV storage engine, and the Memcached API. I'll also give MySQL tuning tips for data loading, and how to use multi-threaded clients.
A Deep Dive into Stateful Stream Processing in Structured Streaming with Tath... – Databricks
Stateful processing is one of the most challenging aspects of distributed, fault-tolerant stream processing. The DataFrame APIs in Structured Streaming make it very easy for the developer to express their stateful logic, either implicitly (streaming aggregations) or explicitly (mapGroupsWithState). However, there are a number of moving parts under the hood which makes all the magic possible. In this talk, I am going to dive deeper into how stateful processing works in Structured Streaming.
In particular, I’m going to discuss the following.
• Different stateful operations in Structured Streaming
• How state data is stored in a distributed, fault-tolerant manner using State Stores
• How you can write custom State Stores for saving state to external storage systems.
Dapper caches query information like SQL statements and parameters to improve performance when materializing objects from query results. The cache is stored in a ConcurrentDictionary that is never flushed, so it could cause memory issues with dynamically-generated SQL. Queries using parameters are preferred since the cache key depends on the SQL and parameters, allowing caching of the execution plan. Buffering determines if all rows are loaded into memory before iterating. QueryMultiple is used for queries returning multiple result sets. Dirty tracking with interfaces allows Dapper to detect whether updates actually changed data to skip unnecessary SQL generation.
This document provides an overview of ADO.NET Entity Framework (EF), which is an object-relational mapping (ORM) framework that allows .NET developers to work with relational data using domain-specific objects. It discusses key EF concepts like the entity data model, architecture, features, and lifecycle. The document explains that EF maps database tables and relationships to .NET classes and allows LINQ queries over object collections to retrieve and manipulate data, hiding SQL complexity. It also covers the ObjectContext class that manages database connections and entities.
Rasheed Amir presents on Spring Boot. He discusses how Spring Boot aims to help developers build production-grade Spring applications quickly with minimal configuration. It provides default functionality for tasks like embedding servers and externalizing configuration. Spring Boot favors convention over configuration and aims to get developers started quickly with a single focus. It also exposes auto-configuration for common Spring and related technologies so that applications can take advantage of them without needing to explicitly configure them.
Windows 10 NT Heap Exploitation (English version) – Angel Boy
The document discusses the Windows memory allocator and heap exploitation. It describes the core components and data structures of the NT heap, including the _HEAP structure, _HEAP_ENTRY chunks, BlocksIndex structure, and FreeLists. It also explains the differences between the backend and frontend allocators as well as how chunks of different sizes are managed.
Apache Hive is a rapidly evolving project which continues to enjoy great adoption in the big data ecosystem. As Hive continues to grow its support for analytics, reporting, and interactive query, the community is hard at work in improving it along with many different dimensions and use cases. This talk will provide an overview of the latest and greatest features and optimizations which have landed in the project over the last year. Materialized views, the extension of ACID semantics to non-ORC data, and workload management are some noteworthy new features.
We will discuss optimizations which provide major performance gains as well as integration with other big data technologies such as Apache Spark, Druid, and Kafka. The talk will also provide a glimpse of what is expected to come in the near future.
This document provides an overview and introduction to ASP.NET 5 and MVC 6. It discusses the history of ASP.NET and outlines improvements in ASP.NET 5, including being cross-platform, modular, faster, and using NuGet packages. MVC 6 unifies MVC, Web API, and Web Pages and uses view components instead of child actions. Tag helpers generate markup and validation helpers are also introduced.
The document discusses best practices for exception handling in Java applications. It recommends that exceptions should only be used for exceptional situations, be properly logged, and result in appropriate error responses. Business exceptions should be thrown for invalid user behavior, while technical exceptions occurring internally should be wrapped in business exceptions. Exceptions should have clear, descriptive names and result in the proper HTTP status codes. The document also provides examples of implementing localized exceptions, handling exceptions globally or at the controller level, and using SLF4J with Logback for logging.
Apache Spark is a fast and general engine for large-scale data processing. It provides a unified API for batch, interactive, and streaming data processing using in-memory primitives. A benchmark showed Spark was able to sort 100TB of data 3 times faster than Hadoop using 10 times fewer machines by keeping data in memory between jobs.
PySpark Programming | PySpark Concepts with Hands-On | PySpark Training | Edu... – Edureka!
** PySpark Certification Training: https://www.edureka.co/pyspark-certification-training **
This Edureka tutorial on PySpark Programming will give you a complete insight into the fundamental concepts of PySpark, which include the following:
1. PySpark
2. RDDs
3. DataFrames
4. PySpark SQL
5. PySpark Streaming
6. Machine Learning (MLlib)
This talk introduces Spring's REST stack - Spring MVC, Spring HATEOAS, Spring Data REST, Spring Security OAuth and Spring Social - while refining an API to move higher up the Richardson maturity model
This Edureka "Node.js Express tutorial" will help you to learn the Node.js express fundamentals with examples. Express.js is flexible and minimal node.js web application framework that provides robust set of features to develop mobile and web applications. It facilitates the rapid development of node.js applications. Below are the topics covered in this tutorial:
1) Why Express.js?
2) What is Express.js?
3) Express Installation
4) Express Routes
5) Express Middlewares
Designing Structured Streaming Pipelines – How to Architect Things Right – Databricks
"Structured Streaming has proven to be the best platform for building distributed stream processing applications. Its unified SQL/Dataset/DataFrame APIs and Spark's built-in functions make it easy for developers to express complex computations. However, expressing the business logic is only part of the larger problem of building end-to-end streaming pipelines that interact with a complex ecosystem of storage systems and workloads. It is important for the developer to truly understand the business problem needs to be solved.
What are you trying to consume? Single source? Joining multiple streaming sources? Joining streaming with static data?
What are you trying to produce? What is the final output that the business wants? What type of queries does the business want to run on the final output?
When do you want it? When does the business want the data? What is the acceptable latency? Do you really need millisecond-level latency?
How much are you willing to pay for it? This is the ultimate question, and the answer significantly determines how feasible it is to solve the above questions.
These are the questions that we ask every customer in order to help them design their pipeline. In this talk, I am going to go through the decision tree of designing the right architecture for solving your problem."
MySQL users commonly ask: Here's my table, what indexes do I need? Why aren't my indexes helping me? Don't indexes cause overhead? This talk gives you some practical answers, with a step by step method for finding the queries you need to optimize, and choosing the best indexes for them.
CanSecWest 2017 - Port(al) to the iOS Core – Stefan Esser
This document discusses a new iOS kernel exploitation technique that involves manipulating mach ports. It fills the kernel heap with pointers to mach ports, then overwrites those pointers to fake ports that point to attacker-controlled data structures. This allows calling kernel APIs and the Mach API using the fake ports to potentially execute arbitrary code or escalate privileges. The technique was previously private but was leaked in late 2016 and used in the Yalu jailbreak.
Meet Up - Spark Stream Processing + Kafka – Knoldus Inc.
This document provides an overview of Spark Streaming concepts including:
- Streams are sequences of data elements made available over time that can be accessed sequentially
- Stream processing involves continuously and concurrently processing live data streams in micro-batches
- Spark Streaming provides scalable and fault-tolerant stream processing using a micro-batch architecture where streams are divided into batches that are processed through transformations on resilient distributed datasets (RDDs)
- Transformations on DStreams apply operations like map, filter, reduce to the underlying RDDs of each batch
Deep Dive into Stateful Stream Processing in Structured Streaming with Tathag... – Databricks
Structured Streaming provides stateful stream processing capabilities in Spark SQL through built-in operations like aggregations and joins as well as user-defined stateful transformations. It handles state automatically through watermarking to limit state size by dropping old data. For arbitrary stateful logic, MapGroupsWithState requires explicit state management by the user.
Migrating Apache Spark ML Jobs to Spark + Tensorflow on Kubeflow – Databricks
This document summarizes Holden Karau's presentation on augmenting Spark ML pipelines with Kubeflow and TensorFlow. The presentation explored splitting a Spark ML pipeline into feature preparation in Spark and model training in TensorFlow, saving the Spark output in a TF-compatible format, and executing the components as part of a Kubeflow pipeline that uses the Spark operator. It noted challenges with Kubeflow's current stability but provided options for integrating Spark jobs using the operator or notebooks. The presentation concluded by discussing alternatives to this approach and some ending notes of caution.
Real-time Analytics with Apache Flink and Druid – Jan Graßegger
This document discusses using Apache Flink and Druid for real-time analytics. It describes Druid as an online analytical processing (OLAP) system that is column-oriented, distributed, and uses built-in data sharding based on time windows. It also introduces Tranquility, which helps ingest real-time data into Druid from systems like Kafka, Spark, and Flink. The document proposes a processing architecture using Kafka, Flink, Druid and Tranquility, with HDFS for replays, to enable real-time reporting with capabilities for replays from HDFS and Kafka.
This document discusses the data access library Dapper and how it provides a simple yet high-performance way to query and manipulate data in databases. It begins by covering traditional data access methods in .NET and issues with ORMs. It then introduces Dapper as a micro-ORM that maps database rows to objects quickly using dynamic code generation. Key features covered include querying, loading related entities, paging results, and basic CRUD operations. The document encourages further reading on these topics.
This document compares ORM and micro-ORM approaches to data access in .NET applications. ORMs like Entity Framework provide more features and convenience but can impact performance, while micro-ORMs like Dapper are more lightweight and focused on speed. The document provides pros and cons of each approach and recommendations on when to use ORMs versus micro-ORMs based on the needs and priorities of different applications. Performance optimizations for ORMs like using projections and avoiding lazy loading are also discussed.
Dapper: the microORM that will change your life – Davide Mauri
ORM or stored procedures? Code First or Database First? Ad-hoc queries? Impedance mismatch? If you're a developer, or a DBA working with developers, you have heard all these terms at least once in your life… and usually in the middle of a heated discussion, debating one or the other. Well, thanks to StackOverflow's Dapper, all these fights are finished. Dapper is a blazing-fast micro-ORM that maps SQL queries to classes automatically, leaving (and encouraging) the use of stored procedures, parameterized statements and all the good stuff that SQL Server offers (JSON and TVPs are supported too!). In this session I'll show how to use Dapper in your projects, from the very basics to some more complex usages that will help you create *really fast* applications without the burden of huge and complex ORMs. The days of impedance mismatch are finally over!
Object Relational Mapping with Dapper (Micro ORM) – Muhammad Umar
Object-relational mapping (ORM) is a modern technique used to make incompatible type systems collaborate. Dapper is a micro-ORM, especially known for its efficiency and simplicity.
dotNet Miami - June 21, 2012: Richie Rump: Entity Framework: Code First and M... – dotNet Miami
dotNet Miami - June 21, 2012: Presented by Richie Rump: Traditionally, Entity Framework has used a designer and XML files to define the conceptual and storage models. Now with Entity Framework Code First we can ditch the XML files and define the data model directly in code. This session will give an overview of all of the awesomeness that is Code First including Data Annotations, Fluent API, DbContext and the new Migrations feature. Be prepared for a fast moving and interactive session filled with great information on how to access your data.
Entity Framework: Code First and Magic Unicorns – Richie Rump
Entity Framework is an object-relational mapping framework that allows developers to work with relational data using domain-specific objects. It includes features like code first modeling, migrations, data annotations, and the DbContext API. Newer versions have added additional functionality like support for enums, performance improvements, and spatial data types. Resources for learning more include blogs by Julie Lerman and Rowan Miller as well as StackOverflow and PluralSight videos.
the .NET Framework. It provides the claf... – TesfahunMaru1
ADO.NET (ActiveX Data Objects .NET) is the primary data access API for the .NET Framework. It provides the classes that you use as you develop database applications with Visual Basic .NET as well as other .NET languages.
ADO.NET provides a set of classes for working with data in .NET applications. It offers improvements over ADO such as support for disconnected data access, XML transport of data, and a programming model designed for modern applications. The core classes of ADO.NET include the Connection class for establishing a connection to a data source, the Command class for executing queries and stored procedures, the DataReader class for sequential access to query results, and the DataAdapter class for populating a DataSet and updating data in the data source. Developers use ADO.NET to connect to databases, retrieve data using DataAdapters, generate DataSets to store and manipulate the data, and display it using list-bound controls like DropDownLists and
Object Relational Mapping In Real World Applications – PhilWinstanley
Object/Relational Mapping (ORM) is a technique that maps object-oriented classes and properties to relational database tables and fields. ORM leverages existing object-oriented programming skills and provides an abstraction layer over databases. There are important differences between ORM and traditional data access approaches like ADO.NET in areas like data binding, business logic implementation, and support for multi-tier architectures.
The document discusses the impedance mismatch between object-oriented programming and relational databases. It notes key differences in their type systems, architectural styles, structural relationships, identity constructs, transaction boundaries, and query capabilities. Object-relational mapping (ORM) is introduced as a technique for converting between incompatible data types in databases and object-oriented languages. LINQ provides language-integrated querying for relational data through DLINQ, mapping tables to classes and allowing queries over strongly-typed results.
Freeing Yourself from an RDBMS Architecture – David Hoerster
Explore how we can begin to move functionality from a typical RDBMS application to one that uses tools and frameworks like MongoDB, Solr and Redis. At the end, the architecture we've evolved looks similar to…
This document provides an overview of document databases and MongoDB. It discusses key concepts of document databases like dynamic schemas, embedding of related data, and lack of joins. Benefits include scalability, flexibility in data modeling, and performance. The document outlines MongoDB internals such as replication, sharding, and BSON data storage format. It also promotes MongoDB as the most popular open-source document database and provides links for additional .NET resources.
The document discusses LINQ (Language Integrated Query), which allows querying of data from various sources in .NET using a common language integrated into C# and VB.NET. It covers the context and motivation for LINQ, its architecture and usage with different data sources like XML, relational databases, and web services. It also discusses LINQ query operations, performance considerations, customizations, alternatives to LINQ, and new features in LINQ for .NET Framework 4.0.
ADO.NET is a data access technology that allows applications to connect to and manipulate data from various data sources. It provides a common object model for data access that can be used across different database systems through data providers. The core objects in ADO.NET include the Connection, Command, DataReader, DataAdapter and DataSet. Data can be accessed in ADO.NET using either a connected or disconnected model. The disconnected model uses a DataSet to cache data locally, while the connected model directly executes commands against an open connection.
ADO.NET is a set of libraries included with the .NET Framework that help communicate with various data stores from .NET applications. The ADO.NET libraries include classes for connecting to a data source, submitting queries, and processing results. ADO.NET also allows for disconnected data access using objects like the DataSet which allows data to be cached and edited offline. The core ADO.NET objects include connections, commands, data readers, data adapters and data sets which provide functionality similar to but also improvements over ADO.
The document discusses some of the shortcomings of object-relational mapping (ORM) frameworks. It argues that ORMs can lead developers to go fast initially but end up going slow, as ORMs don't allow for efficient database use and can result in poor code quality. The document also explains that relational data is about subsets of data, not individual objects, and that developers should take control of SQL and understand database concepts rather than relying solely on ORM frameworks.
This document provides an overview of ADO.NET and how it can be used to connect front-end applications like C# to back-end databases. It discusses the key classes and components in ADO.NET like Connection, Command, DataAdapter and DataSet. It provides examples of performing basic CRUD operations on databases using both connection-oriented and disconnected models. It also demonstrates how to retrieve and display data in controls like DataGridView and bind related tables using DataRelations in a DataSet.
In-App Guidance_ Save Enterprises Millions in Training & IT Costs.pptx – aptyai
Discover how in-app guidance empowers employees, streamlines onboarding, and reduces IT support needs, helping enterprises save millions on training and support costs while boosting productivity.
Slack like a pro: strategies for 10x engineering teams – Nacho Cougil
You know Slack, right? It's that tool that some of us have known for the amount of "noise" it generates per second (and that many of us mute as soon as we install it 😅).
But, do you really know it? Do you know how to use it to get the most out of it? Are you sure 🤔? Are you tired of the amount of messages you have to reply to? Are you worried about the hundred conversations you have open? Or are you unaware of changes in projects relevant to your team? Would you like to automate tasks but don't know how to do so?
In this session, I'll try to share how using Slack can help you to be more productive, not only for you but for your colleagues and how that can help you to be much more efficient... and live more relaxed 😉.
If you thought that our work was based (only) on writing code, ... I'm sorry to tell you, but the truth is that it's not 😅. What's more, in the fast-paced world we live in, where so many things change at an accelerated speed, communication is key, and if you use Slack, you should learn to make the most of it.
---
Presentation shared at JCON Europe '25
Feedback form:
http://tiny.cc/slack-like-a-pro-feedback
Slides of Limecraft Webinar on May 8th 2025, where Jonna Kokko and Maarten Verwaest discuss the latest release.
This release includes major enhancements and improvements of the Delivery Workspace, as well as provisions against unintended exposure of Graphic Content, and rolls out the third iteration of dashboards.
Customer cases include Scripted Entertainment (continuing drama) for Warner Bros, as well as AI integration in Avid for ITV Studios Daytime.
AI-proof your career, by Olivier Vroom and David Williamson – UXPA Boston
This talk explores the evolving role of AI in UX design and the ongoing debate about whether AI might replace UX professionals. The discussion will explore how AI is shaping workflows, where human skills remain essential, and how designers can adapt. Attendees will gain insights into the ways AI can enhance creativity, streamline processes, and create new challenges for UX professionals.
AI’s influence on UX is growing, from automating research analysis to generating design prototypes. While some believe AI could make most workers (including designers) obsolete, AI can also be seen as an enhancement rather than a replacement. This session, featuring two speakers, will examine both perspectives and provide practical ideas for integrating AI into design workflows, developing AI literacy, and staying adaptable as the field continues to change.
The session will include a relatively long guided Q&A and discussion section, encouraging attendees to philosophize, share reflections, and explore open-ended questions about AI’s long-term impact on the UX profession.
Join us for the Multi-Stakeholder Consultation Program on the Implementation of Digital Nepal Framework (DNF) 2.0 and the Way Forward, a high-level workshop designed to foster inclusive dialogue, strategic collaboration, and actionable insights among key ICT stakeholders in Nepal. This national-level program brings together representatives from government bodies, private sector organizations, academia, civil society, and international development partners to discuss the roadmap, challenges, and opportunities in implementing DNF 2.0. With a focus on digital governance, data sovereignty, public-private partnerships, startup ecosystem development, and inclusive digital transformation, the workshop aims to build a shared vision for Nepal’s digital future. The event will feature expert presentations, panel discussions, and policy recommendations, setting the stage for unified action and sustained momentum in Nepal’s digital journey.
Crazy Incentives and How They Kill Security. How Do You Turn the Wheel? – Christian Folini
Everybody is driven by incentives. Good incentives persuade us to do the right thing and patch our servers. Bad incentives make us eat unhealthy food and follow stupid security practices.
There is a huge resource problem in IT, especially in the IT security industry. Therefore, you would expect people to pay attention to the existing incentives and the ones they create with their budget allocation, their awareness training, their security reports, etc.
But reality paints a different picture: Bad incentives all around! We see insane security practices eating valuable time and online training annoying corporate users.
But it's even worse. I've come across incentives that lure companies into creating bad products, and I've seen companies create products that incentivize their customers to waste their time.
It takes people like you and me to say "NO" and stand up for real security!
A national workshop bringing together government, private sector, academia, and civil society to discuss the implementation of Digital Nepal Framework 2.0 and shape the future of Nepal’s digital transformation.
Digital Technologies for Culture, Arts and Heritage: Insights from Interdisci... – Vasileios Komianos
Keynote speech at 3rd Asia-Europe Conference on Applied Information Technology 2025 (AETECH), titled “Digital Technologies for Culture, Arts and Heritage: Insights from Interdisciplinary Research and Practice". The presentation draws on a series of projects, exploring how technologies such as XR, 3D reconstruction, and large language models can shape the future of heritage interpretation, exhibition design, and audience participation — from virtual restorations to inclusive digital storytelling.
Longitudinal Benchmark: A Real-World UX Case Study in Onboarding, by Linda Bor... – UXPA Boston
This is a case study of a three-part longitudinal research study with 100 prospects to understand their onboarding experiences. In part one, we performed a heuristic evaluation of the websites and the getting started experiences of our product and six competitors. In part two, prospective customers evaluated the website of our product and one other competitor (best performer from part one), chose one product they were most interested in trying, and explained why. After selecting the one they were most interested in, we asked them to create an account to understand their first impressions. In part three, we invited the same prospective customers back a week later for a follow-up session with their chosen product. They performed a series of tasks while sharing feedback throughout the process. We collected both quantitative and qualitative data to make actionable recommendations for marketing, product development, and engineering, highlighting the value of user-centered research in driving product and service improvements.
Integrating FME with Python: Tips, Demos, and Best Practices for Powerful Aut... – Safe Software
FME is renowned for its no-code data integration capabilities, but that doesn’t mean you have to abandon coding entirely. In fact, Python’s versatility can enhance FME workflows, enabling users to migrate data, automate tasks, and build custom solutions. Whether you’re looking to incorporate Python scripts or use ArcPy within FME, this webinar is for you!
Join us as we dive into the integration of Python with FME, exploring practical tips, demos, and the flexibility of Python across different FME versions. You’ll also learn how to manage SSL integration and tackle Python package installations using the command line.
During the hour, we’ll discuss:
-Top reasons for using Python within FME workflows
-Demos on integrating Python scripts and handling attributes
-Best practices for startup and shutdown scripts
-Using FME’s AI Assist to optimize your workflows
-Setting up FME Objects for external IDEs
Because when you need to code, the focus should be on results—not compatibility issues. Join us to master the art of combining Python and FME for powerful automation and data migration.
BR Softech is a leading hyper-casual game development company offering lightweight, addictive games with quick gameplay loops. Our expert developers create engaging titles for iOS, Android, and cross-platform markets using Unity and other top engines.
UiPath AgentHack - Build the AI agents of tomorrow_Enablement 1.pptx – anabulhac
Join our first UiPath AgentHack enablement session with the UiPath team to learn more about the upcoming AgentHack! Explore some of the things you'll want to think about as you prepare your entry. Ask your questions.
Harmonizing Multi-Agent Intelligence | Open Data Science Conference | Gary Ar... – Gary Arora
This deck from my talk at the Open Data Science Conference explores how multi-agent AI systems can be used to solve practical, everyday problems — and how those same patterns scale to enterprise-grade workflows.
I cover the evolution of AI agents, when (and when not) to use multi-agent architectures, and how to design, orchestrate, and operationalize agentic systems for real impact. The presentation includes two live demos: one that books flights by checking my calendar, and another showcasing a tiny local visual language model for efficient multimodal tasks.
Key themes include:
✅ When to use single-agent vs. multi-agent setups
✅ How to define agent roles, memory, and coordination
✅ Using small/local models for performance and cost control
✅ Building scalable, reusable agent architectures
✅ Why personal use cases are the best way to learn before deploying to the enterprise
Building Connected Agents: An Overview of Google's ADK and A2A Protocol – Suresh Peiris
Google's Agent Development Kit (ADK) provides a framework for building AI agents, including complex multi-agent systems. It offers tools for development, deployment, and orchestration.
Complementing this, the Agent2Agent (A2A) protocol is an open standard by Google that enables these AI agents, even if from different developers or frameworks, to communicate and collaborate effectively. A2A allows agents to discover each other's capabilities and work together on tasks.
In essence, ADK helps create the agents, and A2A provides the common language for these connected agents to interact and form more powerful, interoperable AI solutions.
2. Data Access
How did we get here?
Why is this so hard?
public class Foo
{
//…some properties
}
SQL Server
3. Data Access in .NET
• In the beginning there was System.Data
• IDbConnection
• IDbCommand
• IDataReader
• IDbTransaction
4. System.Data - Reading Records
var results = new List<Employee>();
using (SqlConnection connection = new SqlConnection(Settings.ConnectionString))
{
SqlCommand command = new SqlCommand("SELECT * FROM Employees", connection);
connection.Open();
IDataReader reader = command.ExecuteReader();
while (reader.Read())
{
results.Add(ReadSingleRow(reader));
}
reader.Close();
}
return results;
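ReadSingleRow is left out of the slide; here is a minimal sketch of what such a mapping helper might look like, assuming a simple Employee class with Id and Name columns (both the type and the column names are illustrative, not from the deck):

// Hypothetical Employee type and helper, shown only to illustrate the boilerplate.
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

private static Employee ReadSingleRow(IDataReader reader)
{
    // Every column must be mapped by hand; this is exactly the kind of code Dapper generates.
    return new Employee
    {
        Id = reader.GetInt32(reader.GetOrdinal("Id")),
        Name = reader.GetString(reader.GetOrdinal("Name"))
    };
}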
5. The ORMs will save us
• Hide details of database
• Generate SQL based on object model and configuration
• Change Tracking
• Complex mapping strategies
• Many-to-many relationships
• Inheritance
7. ORM Pain Points
• Black box code generation – what is going on?
• Performance Problems
• Eager Loading vs Lazy Loading
• Complex Inheritance Chains
• Dealing with disconnected entities (in a web context)
8. The Micro-ORMs will save us!
• Query --> Objects
• Usually focused on speed
• Light on features
10. Dapper
• Used in production at Stack Exchange
• Super fast and easy to use
11. Performance of SELECT mapping over 500 iterations - POCO serialization
Method Duration
Hand coded (using a SqlDataReader) 47ms
Dapper ExecuteMapperQuery 49ms
ServiceStack.OrmLite (QueryById) 50ms
PetaPoco 52ms
BLToolkit 80ms
SubSonic CodingHorror 107ms
NHibernate SQL 104ms
Linq 2 SQL ExecuteQuery 181ms
Entity framework ExecuteStoreQuery 631ms
https://github.com/StackExchange/dapper-dot-net#performance-of-select-mapping-over-500-iterations---dynamic-serialization
12. How can it be that fast?
• Dapper dynamically writes code for you
• Emits IL code for tasks like loading a data reader into an object
• https://github.com/StackExchange/Dapper/blob/master/Dapper/SqlMapper.cs#L3078
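In effect, Dapper emits and then caches (keyed by the SQL plus parameter and result types, as noted in the summary above) the same kind of reader-to-object code written by hand on slide 4, so the generation cost is paid only on the first execution.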
13. Dapper works on Database Connections
A set of extension methods on IDbConnection
Aircraft aircraft;
using (var connection = new SqlConnection(_connectionString))
{
    await connection.OpenAsync(); // Optional: Dapper opens a closed connection itself
    var query = "SELECT * FROM Aircraft WHERE Id = @Id";
    aircraft = await connection.QuerySingleAsync<Aircraft>(query, new { Id = id });
}
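The same extension-method family covers other result shapes as well; for example, a sketch of querying a list with QueryAsync (the Aircraft type and table come from the slide above; the rest is illustrative):

using (var connection = new SqlConnection(_connectionString))
{
    // One Aircraft per row, mapped by matching column names to property names.
    IEnumerable<Aircraft> all = await connection.QueryAsync<Aircraft>("SELECT * FROM Aircraft");
}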
15. Loading Related Objects – Multi-Mapping
• Write a single query that returns all the data in a single row
scheduledFlights = await connection.QueryAsync<ScheduledFlight, Airport, ScheduledFlight>(
    query,
    (flight, airport) => {
        flight.Airport = airport;
        return flight;
    },
    new { FromCode = from });
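The query variable itself isn't shown on the slide; a plausible shape, assuming each row carries the ScheduledFlight columns followed by the Airport columns (by default Dapper splits the row into the two mapped objects at the first column named "Id"; pass the splitOn argument if the boundary column is named differently). The FromCode filter is an assumption based on the parameter above:

// Hypothetical query text for the multi-mapping call above.
var query = @"SELECT sf.*, a.*
              FROM ScheduledFlight sf
              INNER JOIN Airport a ON sf.AirportId = a.Id
              WHERE sf.FromCode = @FromCode";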
16. Multi-Mapping Caveats
• Data duplication
• Query returns a bloated data set
• Multiple instances of an object that represent the same thing
• This is totally fine for One-to-One relationships
• No duplication here
17. Loading Related Objects – Multiple Queries
• Get data using multiple queries and wire them up yourself
• Still executed in a single command that returns multiple result sets
using (var multi = await connection.QueryMultipleAsync(query, new { FromCode = from }))
{
scheduledFlights = multi.Read<ScheduledFlight>();
var airports = multi.Read<Airport>().ToDictionary(a => a.Id);
foreach(var flight in scheduledFlights)
{
flight.Airport = airports[flight.AirportId];
}
}
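Again, the query isn't shown on the slide; a hedged sketch of a two-statement batch that would produce the result sets read above, in order:

// Hypothetical batch for QueryMultipleAsync: result sets come back in statement
// order, matching the Read<ScheduledFlight>() and then Read<Airport>() calls above.
var query = @"SELECT * FROM ScheduledFlight WHERE FromCode = @FromCode;
              SELECT * FROM Airport;";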
18. Multi-Mapping vs Multiple Queries
100 ScheduledFlight Records
Method             Mean
MultiMapping       926.5 us
MultipleResultSets 705.9 us

1,000 ScheduledFlight Records
Method             Mean
MultiMapping       5.098 ms
MultipleResultSets 2.809 ms
21. Paging through large collections
• Use an OFFSET FETCH query
• Include a total count in the result
SELECT * FROM Flight f
INNER JOIN ScheduledFlight sf
ON f.ScheduledFlightId = sf.Id
ORDER BY Day, FlightNumber
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY
SELECT COUNT(*) FROM Flight f
INNER JOIN ScheduledFlight sf
ON f.ScheduledFlightId = sf.Id
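A hedged sketch of issuing both statements in one round trip with Dapper (the query text comes from the slide; the connection variable and the Offset/PageSize parameters are illustrative):

var sql = @"SELECT * FROM Flight f
            INNER JOIN ScheduledFlight sf ON f.ScheduledFlightId = sf.Id
            ORDER BY Day, FlightNumber
            OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY;

            SELECT COUNT(*) FROM Flight f
            INNER JOIN ScheduledFlight sf ON f.ScheduledFlightId = sf.Id;";

using (var multi = await connection.QueryMultipleAsync(sql, new { Offset = 0, PageSize = 10 }))
{
    var page = multi.Read<Flight>().ToList();   // the requested page of rows
    var total = multi.Read<int>().Single();     // total count, for the pager
}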
22. Insert / Update / Delete
• Use the ExecuteAsync method
• Built-in support for batch inserts
await connection.ExecuteAsync(
@"INSERT INTO Flight(ScheduledFlightId, Day, ScheduledDeparture,
ScheduledArrival)
VALUES(@ScheduledFlightId, @Day, @ScheduledDeparture, @ScheduledArrival)",
flights);
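A note on semantics (Dapper's documented behavior, not stated on the slide): when the parameter argument is a sequence like flights, ExecuteAsync runs the statement once per element, so this is convenient batching rather than a single bulk operation; for very large loads, SqlBulkCopy is the usual alternative. A small illustrative setup, assuming a Flight POCO whose property names match the statement's parameters:

// Hypothetical input collection for the batch insert above.
var flights = new List<Flight>
{
    new Flight { ScheduledFlightId = 1, Day = DateTime.Today,
                 ScheduledDeparture = DateTime.Today.AddHours(9),
                 ScheduledArrival = DateTime.Today.AddHours(11) }
};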
#3: Data access has consumed such a huge amount of our time.
- Why is that?
- What is so hard about this?
#4: These 4 interfaces were in the original .NET framework.
They have been the basis for ALL database access since then.
Each database provider has an implementation of each of these interfaces (e.g. SqlConnection)
Every ORM or Micro-ORM written in .NET is based on these 4 interfaces.
#5: Show sample code for reading and writing data using raw SQL
#20: Demo
Start by loading just the first level (Airport + Arrival and Departure Flights)
Use multi-mapping to load the ScheduledFlight info
Use multiple queries to also load each ScheduledFlight's Arrival and Departure airports